2025-02-10 08:30:47.492282 | Job console starting...
2025-02-10 08:30:47.507483 | Updating repositories
2025-02-10 08:30:47.576324 | Preparing job workspace
2025-02-10 08:30:49.418118 | Running Ansible setup...
2025-02-10 08:30:54.824979 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-02-10 08:30:55.550500 |
2025-02-10 08:30:55.550659 | PLAY [Base pre]
2025-02-10 08:30:55.581960 |
2025-02-10 08:30:55.582093 | TASK [Setup log path fact]
2025-02-10 08:30:55.616031 | orchestrator | ok
2025-02-10 08:30:55.639110 |
2025-02-10 08:30:55.639259 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-10 08:30:55.674567 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.685089 |
2025-02-10 08:30:55.685227 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-10 08:30:55.749464 | orchestrator | ok
2025-02-10 08:30:55.758433 |
2025-02-10 08:30:55.758558 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-10 08:30:55.814517 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.834364 |
2025-02-10 08:30:55.834561 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-10 08:30:55.861309 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.877031 |
2025-02-10 08:30:55.877190 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-10 08:30:55.902809 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.925284 |
2025-02-10 08:30:55.925433 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-10 08:30:55.950835 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.983338 |
2025-02-10 08:30:55.983509 | TASK [emit-job-header : Print job information]
2025-02-10 08:30:56.040651 | # Job Information
2025-02-10 08:30:56.040867 | Ansible Version: 2.15.3
2025-02-10 08:30:56.040908 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-02-10 08:30:56.040940 | Pipeline: post
2025-02-10 08:30:56.040963 | Executor: 7d211f194f6a
2025-02-10 08:30:56.040983 | Triggered by: https://github.com/osism/testbed/commit/88c9a01550409e69f921ca14c30503ff015e9804
2025-02-10 08:30:56.041003 | Event ID: 4f37aa8a-e789-11ef-84dd-ee02a0248723
2025-02-10 08:30:56.048468 |
2025-02-10 08:30:56.048586 | LOOP [emit-job-header : Print node information]
2025-02-10 08:30:56.221074 | orchestrator | ok:
2025-02-10 08:30:56.221360 | orchestrator | # Node Information
2025-02-10 08:30:56.221426 | orchestrator | Inventory Hostname: orchestrator
2025-02-10 08:30:56.221474 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-02-10 08:30:56.221518 | orchestrator | Username: zuul-testbed06
2025-02-10 08:30:56.221560 | orchestrator | Distro: Debian 12.9
2025-02-10 08:30:56.221601 | orchestrator | Provider: static-testbed
2025-02-10 08:30:56.221640 | orchestrator | Label: testbed-orchestrator
2025-02-10 08:30:56.221722 | orchestrator | Product Name: OpenStack Nova
2025-02-10 08:30:56.221813 | orchestrator | Interface IP: 81.163.193.140
2025-02-10 08:30:56.247481 |
2025-02-10 08:30:56.247614 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-02-10 08:30:56.774410 | orchestrator -> localhost | changed
2025-02-10 08:30:56.784011 |
2025-02-10 08:30:56.784140 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-02-10 08:30:57.847356 | orchestrator -> localhost | changed
2025-02-10 08:30:57.869049 |
2025-02-10 08:30:57.869180 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-02-10 08:30:58.154480 | orchestrator -> localhost | ok
2025-02-10 08:30:58.163419 |
2025-02-10 08:30:58.163539 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-02-10 08:30:58.206446 | orchestrator | ok
2025-02-10 08:30:58.224157 | orchestrator | included: /var/lib/zuul/builds/0ea8ccc664f545108af0c9bbdc49281a/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-02-10 08:30:58.233164 |
2025-02-10 08:30:58.233314 | TASK [add-build-sshkey : Create Temp SSH key]
2025-02-10 08:30:59.250667 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-02-10 08:30:59.250909 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0ea8ccc664f545108af0c9bbdc49281a/work/0ea8ccc664f545108af0c9bbdc49281a_id_rsa
2025-02-10 08:30:59.250945 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0ea8ccc664f545108af0c9bbdc49281a/work/0ea8ccc664f545108af0c9bbdc49281a_id_rsa.pub
2025-02-10 08:30:59.250970 | orchestrator -> localhost | The key fingerprint is:
2025-02-10 08:30:59.250993 | orchestrator -> localhost | SHA256:sKfkqVh4pAfB57u0nXQvbLF2ebMLmDu67daXpFsIsXQ zuul-build-sshkey
2025-02-10 08:30:59.251016 | orchestrator -> localhost | The key's randomart image is:
2025-02-10 08:30:59.251042 | orchestrator -> localhost | +---[RSA 3072]----+
2025-02-10 08:30:59.251063 | orchestrator -> localhost | | |
2025-02-10 08:30:59.251083 | orchestrator -> localhost | | . |
2025-02-10 08:30:59.251103 | orchestrator -> localhost | | o . . o E |
2025-02-10 08:30:59.251122 | orchestrator -> localhost | | + + + |
2025-02-10 08:30:59.251141 | orchestrator -> localhost | | . o o S |
2025-02-10 08:30:59.251160 | orchestrator -> localhost | | = + +.+ .. |
2025-02-10 08:30:59.251178 | orchestrator -> localhost | | o * =.++o+.. |
2025-02-10 08:30:59.251197 | orchestrator -> localhost | | * * +Oo=o= |
2025-02-10 08:30:59.251217 | orchestrator -> localhost | | . + =B++o+o+ |
2025-02-10 08:30:59.251236 | orchestrator -> localhost | +----[SHA256]-----+
2025-02-10 08:30:59.251283 | orchestrator -> localhost | ok: Runtime: 0:00:00.528440
2025-02-10 08:30:59.260473 |
2025-02-10 08:30:59.260592 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-02-10 08:30:59.294590 | orchestrator | ok
2025-02-10 08:30:59.306859 | orchestrator | included: /var/lib/zuul/builds/0ea8ccc664f545108af0c9bbdc49281a/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-02-10 08:30:59.318158 |
2025-02-10 08:30:59.318258 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-02-10 08:30:59.342782 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:59.351645 |
2025-02-10 08:30:59.351763 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-02-10 08:31:00.148359 | orchestrator | changed
2025-02-10 08:31:00.157807 |
2025-02-10 08:31:00.157928 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-02-10 08:31:00.451215 | orchestrator | ok
2025-02-10 08:31:00.500895 |
2025-02-10 08:31:00.501030 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-02-10 08:31:00.928238 | orchestrator | ok
2025-02-10 08:31:00.937729 |
2025-02-10 08:31:00.937853 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-02-10 08:31:01.367274 | orchestrator | ok
2025-02-10 08:31:01.378438 |
2025-02-10 08:31:01.378563 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-02-10 08:31:01.413383 | orchestrator | skipping: Conditional result was False
2025-02-10 08:31:01.429173 |
2025-02-10 08:31:01.429332 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-02-10 08:31:01.832499 | orchestrator -> localhost | changed
2025-02-10 08:31:01.852489 |
2025-02-10 08:31:01.852647 | TASK [add-build-sshkey : Add back temp key]
2025-02-10 08:31:02.225771 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0ea8ccc664f545108af0c9bbdc49281a/work/0ea8ccc664f545108af0c9bbdc49281a_id_rsa (zuul-build-sshkey)
2025-02-10 08:31:02.226019 | orchestrator -> localhost | ok: Runtime: 0:00:00.015415
2025-02-10 08:31:02.235057 |
2025-02-10 08:31:02.235181 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-02-10 08:31:02.647815 | orchestrator | ok
2025-02-10 08:31:02.658125 |
2025-02-10 08:31:02.658264 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-02-10 08:31:02.693508 | orchestrator | skipping: Conditional result was False
2025-02-10 08:31:02.720092 |
2025-02-10 08:31:02.720229 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-02-10 08:31:03.132194 | orchestrator | ok
2025-02-10 08:31:03.147489 |
2025-02-10 08:31:03.147621 | TASK [validate-host : Define zuul_info_dir fact]
2025-02-10 08:31:03.191002 | orchestrator | ok
2025-02-10 08:31:03.198810 |
2025-02-10 08:31:03.198931 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-02-10 08:31:03.502103 | orchestrator -> localhost | ok
2025-02-10 08:31:03.520797 |
2025-02-10 08:31:03.520945 | TASK [validate-host : Collect information about the host]
2025-02-10 08:31:04.805623 | orchestrator | ok
2025-02-10 08:31:04.823853 |
2025-02-10 08:31:04.823987 | TASK [validate-host : Sanitize hostname]
2025-02-10 08:31:04.904045 | orchestrator | ok
2025-02-10 08:31:04.914878 |
2025-02-10 08:31:04.915037 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-02-10 08:31:05.533805 | orchestrator -> localhost | changed
2025-02-10 08:31:05.551526 |
2025-02-10 08:31:05.551735 | TASK [validate-host : Collect information about zuul worker]
2025-02-10 08:31:06.082850 | orchestrator | ok
2025-02-10 08:31:06.091065 |
2025-02-10 08:31:06.091189 | TASK [validate-host : Write out all zuul information for each host]
2025-02-10 08:31:06.660268 | orchestrator -> localhost | changed
2025-02-10 08:31:06.675213 |
2025-02-10 08:31:06.675333 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-02-10 08:31:06.941066 | orchestrator | ok
2025-02-10 08:31:06.948982 |
2025-02-10 08:31:06.949100 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-02-10 08:32:07.889916 | orchestrator | changed:
2025-02-10 08:32:07.890147 | orchestrator | .d..t...... src/
2025-02-10 08:32:07.890185 | orchestrator | .d..t...... src/github.com/
2025-02-10 08:32:07.890211 | orchestrator | .d..t...... src/github.com/osism/
2025-02-10 08:32:07.890234 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-02-10 08:32:07.890256 | orchestrator | RedHat.yml
2025-02-10 08:32:07.906483 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-02-10 08:32:07.906500 | orchestrator | RedHat.yml
2025-02-10 08:32:07.906552 | orchestrator | = 1.53.0"...
2025-02-10 08:32:23.420662 | orchestrator | 08:32:23.420 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-02-10 08:32:24.438001 | orchestrator | 08:32:24.437 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-02-10 08:32:25.232109 | orchestrator | 08:32:25.231 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-02-10 08:32:26.843154 | orchestrator | 08:32:26.842 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-02-10 08:32:28.200926 | orchestrator | 08:32:28.200 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-02-10 08:32:29.089349 | orchestrator | 08:32:29.089 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-02-10 08:32:29.897934 | orchestrator | 08:32:29.897 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-02-10 08:32:29.898043 | orchestrator | 08:32:29.897 STDOUT terraform: Providers are signed by their developers.
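The provider versions resolved during "tofu init" above come from the testbed's OpenTofu configuration. As a rough sketch only (not the actual testbed code), a required_providers block consistent with this output could look like the following; the ">= 1.53.0" fragment is assumed to belong to the openstack provider, and the null provider constraint shown here is purely illustrative.

# Sketch: provider requirements matching the versions resolved above (assumption, not the testbed source).
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"   # assumed owner of the ">= 1.53.0" constraint fragment in the log
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"    # matches "Finding hashicorp/local versions matching '>= 2.2.0'"
    }
    null = {
      source = "hashicorp/null"  # resolved to v3.2.3 in this run; constraint not visible in the log
    }
  }
}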
2025-02-10 08:32:29.898056 | orchestrator | 08:32:29.897 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-02-10 08:32:29.898093 | orchestrator | 08:32:29.898 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-02-10 08:32:29.898198 | orchestrator | 08:32:29.898 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-02-10 08:32:29.898276 | orchestrator | 08:32:29.898 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-02-10 08:32:29.898513 | orchestrator | 08:32:29.898 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-02-10 08:32:29.898575 | orchestrator | 08:32:29.898 STDOUT terraform: you run "tofu init" in the future.
2025-02-10 08:32:29.898648 | orchestrator | 08:32:29.898 STDOUT terraform: OpenTofu has been successfully initialized!
2025-02-10 08:32:29.898657 | orchestrator | 08:32:29.898 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-02-10 08:32:29.898970 | orchestrator | 08:32:29.898 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-02-10 08:32:29.898986 | orchestrator | 08:32:29.898 STDOUT terraform: should now work.
2025-02-10 08:32:30.215722 | orchestrator | 08:32:29.898 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-02-10 08:32:30.215839 | orchestrator | 08:32:29.898 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-02-10 08:32:30.215851 | orchestrator | 08:32:29.898 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-02-10 08:32:30.215881 | orchestrator | 08:32:30.215 STDOUT terraform: Created and switched to workspace "ci"!
2025-02-10 08:32:30.215939 | orchestrator | 08:32:30.215 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-02-10 08:32:30.215991 | orchestrator | 08:32:30.215 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-02-10 08:32:30.216005 | orchestrator | 08:32:30.215 STDOUT terraform: for this configuration.
2025-02-10 08:32:30.459449 | orchestrator | 08:32:30.459 STDOUT terraform: ci.auto.tfvars
2025-02-10 08:32:31.557297 | orchestrator | 08:32:31.556 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-02-10 08:32:32.141151 | orchestrator | 08:32:32.140 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-02-10 08:32:32.397497 | orchestrator | 08:32:32.397 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-02-10 08:32:32.397645 | orchestrator | 08:32:32.397 STDOUT terraform: plan.
Resource actions are indicated with the following symbols: 2025-02-10 08:32:32.397666 | orchestrator | 08:32:32.397 STDOUT terraform:  + create 2025-02-10 08:32:32.397721 | orchestrator | 08:32:32.397 STDOUT terraform:  <= read (data resources) 2025-02-10 08:32:32.397792 | orchestrator | 08:32:32.397 STDOUT terraform: OpenTofu will perform the following actions: 2025-02-10 08:32:32.397928 | orchestrator | 08:32:32.397 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-02-10 08:32:32.398008 | orchestrator | 08:32:32.397 STDOUT terraform:  # (config refers to values not yet known) 2025-02-10 08:32:32.398157 | orchestrator | 08:32:32.398 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-02-10 08:32:32.398221 | orchestrator | 08:32:32.398 STDOUT terraform:  + checksum = (known after apply) 2025-02-10 08:32:32.398295 | orchestrator | 08:32:32.398 STDOUT terraform:  + created_at = (known after apply) 2025-02-10 08:32:32.398370 | orchestrator | 08:32:32.398 STDOUT terraform:  + file = (known after apply) 2025-02-10 08:32:32.398446 | orchestrator | 08:32:32.398 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.398517 | orchestrator | 08:32:32.398 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.398665 | orchestrator | 08:32:32.398 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-02-10 08:32:32.398736 | orchestrator | 08:32:32.398 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-02-10 08:32:32.398781 | orchestrator | 08:32:32.398 STDOUT terraform:  + most_recent = true 2025-02-10 08:32:32.398853 | orchestrator | 08:32:32.398 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.398958 | orchestrator | 08:32:32.398 STDOUT terraform:  + protected = (known after apply) 2025-02-10 08:32:32.399029 | orchestrator | 08:32:32.398 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.399115 | orchestrator | 08:32:32.399 STDOUT terraform:  + schema = (known after apply) 2025-02-10 08:32:32.399174 | orchestrator | 08:32:32.399 STDOUT terraform:  + size_bytes = (known after apply) 2025-02-10 08:32:32.399248 | orchestrator | 08:32:32.399 STDOUT terraform:  + tags = (known after apply) 2025-02-10 08:32:32.399324 | orchestrator | 08:32:32.399 STDOUT terraform:  + updated_at = (known after apply) 2025-02-10 08:32:32.399343 | orchestrator | 08:32:32.399 STDOUT terraform:  } 2025-02-10 08:32:32.399456 | orchestrator | 08:32:32.399 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-02-10 08:32:32.399526 | orchestrator | 08:32:32.399 STDOUT terraform:  # (config refers to values not yet known) 2025-02-10 08:32:32.399641 | orchestrator | 08:32:32.399 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-02-10 08:32:32.399712 | orchestrator | 08:32:32.399 STDOUT terraform:  + checksum = (known after apply) 2025-02-10 08:32:32.399782 | orchestrator | 08:32:32.399 STDOUT terraform:  + created_at = (known after apply) 2025-02-10 08:32:32.399852 | orchestrator | 08:32:32.399 STDOUT terraform:  + file = (known after apply) 2025-02-10 08:32:32.399926 | orchestrator | 08:32:32.399 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.400004 | orchestrator | 08:32:32.399 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.400068 | orchestrator | 08:32:32.399 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-02-10 08:32:32.400140 | orchestrator | 08:32:32.400 STDOUT terraform:  + 
min_ram_mb = (known after apply) 2025-02-10 08:32:32.400185 | orchestrator | 08:32:32.400 STDOUT terraform:  + most_recent = true 2025-02-10 08:32:32.400255 | orchestrator | 08:32:32.400 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.400326 | orchestrator | 08:32:32.400 STDOUT terraform:  + protected = (known after apply) 2025-02-10 08:32:32.400399 | orchestrator | 08:32:32.400 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.400473 | orchestrator | 08:32:32.400 STDOUT terraform:  + schema = (known after apply) 2025-02-10 08:32:32.400546 | orchestrator | 08:32:32.400 STDOUT terraform:  + size_bytes = (known after apply) 2025-02-10 08:32:32.400640 | orchestrator | 08:32:32.400 STDOUT terraform:  + tags = (known after apply) 2025-02-10 08:32:32.400710 | orchestrator | 08:32:32.400 STDOUT terraform:  + updated_at = (known after apply) 2025-02-10 08:32:32.400740 | orchestrator | 08:32:32.400 STDOUT terraform:  } 2025-02-10 08:32:32.400850 | orchestrator | 08:32:32.400 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-02-10 08:32:32.400915 | orchestrator | 08:32:32.400 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-02-10 08:32:32.401007 | orchestrator | 08:32:32.400 STDOUT terraform:  + content = (known after apply) 2025-02-10 08:32:32.401099 | orchestrator | 08:32:32.401 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-10 08:32:32.401185 | orchestrator | 08:32:32.401 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-10 08:32:32.401272 | orchestrator | 08:32:32.401 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-10 08:32:32.401364 | orchestrator | 08:32:32.401 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-10 08:32:32.401444 | orchestrator | 08:32:32.401 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-10 08:32:32.401534 | orchestrator | 08:32:32.401 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-10 08:32:32.401610 | orchestrator | 08:32:32.401 STDOUT terraform:  + directory_permission = "0777" 2025-02-10 08:32:32.401700 | orchestrator | 08:32:32.401 STDOUT terraform:  + file_permission = "0644" 2025-02-10 08:32:32.401780 | orchestrator | 08:32:32.401 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-02-10 08:32:32.401853 | orchestrator | 08:32:32.401 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.401869 | orchestrator | 08:32:32.401 STDOUT terraform:  } 2025-02-10 08:32:32.401930 | orchestrator | 08:32:32.401 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-02-10 08:32:32.401981 | orchestrator | 08:32:32.401 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-02-10 08:32:32.402093 | orchestrator | 08:32:32.401 STDOUT terraform:  + content = (known after apply) 2025-02-10 08:32:32.402162 | orchestrator | 08:32:32.402 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-10 08:32:32.402232 | orchestrator | 08:32:32.402 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-10 08:32:32.402304 | orchestrator | 08:32:32.402 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-10 08:32:32.402374 | orchestrator | 08:32:32.402 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-10 08:32:32.402443 | orchestrator | 08:32:32.402 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-10 08:32:32.402513 | orchestrator | 08:32:32.402 STDOUT terraform:  + content_sha512 = (known after apply) 
2025-02-10 08:32:32.402562 | orchestrator | 08:32:32.402 STDOUT terraform:  + directory_permission = "0777" 2025-02-10 08:32:32.402656 | orchestrator | 08:32:32.402 STDOUT terraform:  + file_permission = "0644" 2025-02-10 08:32:32.402722 | orchestrator | 08:32:32.402 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-02-10 08:32:32.402794 | orchestrator | 08:32:32.402 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.402827 | orchestrator | 08:32:32.402 STDOUT terraform:  } 2025-02-10 08:32:32.402874 | orchestrator | 08:32:32.402 STDOUT terraform:  # local_file.inventory will be created 2025-02-10 08:32:32.402923 | orchestrator | 08:32:32.402 STDOUT terraform:  + resource "local_file" "inventory" { 2025-02-10 08:32:32.402995 | orchestrator | 08:32:32.402 STDOUT terraform:  + content = (known after apply) 2025-02-10 08:32:32.403064 | orchestrator | 08:32:32.402 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-10 08:32:32.403129 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-10 08:32:32.403189 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-10 08:32:32.403248 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-10 08:32:32.403307 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-10 08:32:32.403365 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-10 08:32:32.403408 | orchestrator | 08:32:32.403 STDOUT terraform:  + directory_permission = "0777" 2025-02-10 08:32:32.403448 | orchestrator | 08:32:32.403 STDOUT terraform:  + file_permission = "0644" 2025-02-10 08:32:32.403502 | orchestrator | 08:32:32.403 STDOUT terraform:  + filename = "inventory.ci" 2025-02-10 08:32:32.403562 | orchestrator | 08:32:32.403 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.403618 | orchestrator | 08:32:32.403 STDOUT terraform:  } 2025-02-10 08:32:32.403697 | orchestrator | 08:32:32.403 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-02-10 08:32:32.403748 | orchestrator | 08:32:32.403 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-02-10 08:32:32.403799 | orchestrator | 08:32:32.403 STDOUT terraform:  + content = (sensitive value) 2025-02-10 08:32:32.403860 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-10 08:32:32.403924 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-10 08:32:32.403980 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-10 08:32:32.404041 | orchestrator | 08:32:32.403 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-10 08:32:32.404097 | orchestrator | 08:32:32.404 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-10 08:32:32.404158 | orchestrator | 08:32:32.404 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-10 08:32:32.404198 | orchestrator | 08:32:32.404 STDOUT terraform:  + directory_permission = "0700" 2025-02-10 08:32:32.404249 | orchestrator | 08:32:32.404 STDOUT terraform:  + file_permission = "0600" 2025-02-10 08:32:32.404291 | orchestrator | 08:32:32.404 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-02-10 08:32:32.404352 | orchestrator | 08:32:32.404 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.404368 | orchestrator | 08:32:32.404 STDOUT 
terraform:  } 2025-02-10 08:32:32.404421 | orchestrator | 08:32:32.404 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-02-10 08:32:32.404473 | orchestrator | 08:32:32.404 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-02-10 08:32:32.404511 | orchestrator | 08:32:32.404 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.404527 | orchestrator | 08:32:32.404 STDOUT terraform:  } 2025-02-10 08:32:32.404654 | orchestrator | 08:32:32.404 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-02-10 08:32:32.404739 | orchestrator | 08:32:32.404 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-02-10 08:32:32.404794 | orchestrator | 08:32:32.404 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.404829 | orchestrator | 08:32:32.404 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.404883 | orchestrator | 08:32:32.404 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.404936 | orchestrator | 08:32:32.404 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.404989 | orchestrator | 08:32:32.404 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.405056 | orchestrator | 08:32:32.404 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-02-10 08:32:32.405109 | orchestrator | 08:32:32.405 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.405145 | orchestrator | 08:32:32.405 STDOUT terraform:  + size = 80 2025-02-10 08:32:32.405179 | orchestrator | 08:32:32.405 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.405205 | orchestrator | 08:32:32.405 STDOUT terraform:  } 2025-02-10 08:32:32.405275 | orchestrator | 08:32:32.405 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-02-10 08:32:32.405349 | orchestrator | 08:32:32.405 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:32.405403 | orchestrator | 08:32:32.405 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.405437 | orchestrator | 08:32:32.405 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.405493 | orchestrator | 08:32:32.405 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.405546 | orchestrator | 08:32:32.405 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.405646 | orchestrator | 08:32:32.405 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.405705 | orchestrator | 08:32:32.405 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-02-10 08:32:32.405761 | orchestrator | 08:32:32.405 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.405794 | orchestrator | 08:32:32.405 STDOUT terraform:  + size = 80 2025-02-10 08:32:32.405827 | orchestrator | 08:32:32.405 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.405842 | orchestrator | 08:32:32.405 STDOUT terraform:  } 2025-02-10 08:32:32.405921 | orchestrator | 08:32:32.405 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-02-10 08:32:32.405993 | orchestrator | 08:32:32.405 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:32.406067 | orchestrator | 08:32:32.405 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.406086 | orchestrator | 08:32:32.406 STDOUT terraform:  + 
availability_zone = "nova" 2025-02-10 08:32:32.406145 | orchestrator | 08:32:32.406 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.406194 | orchestrator | 08:32:32.406 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.406243 | orchestrator | 08:32:32.406 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.406303 | orchestrator | 08:32:32.406 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-02-10 08:32:32.406354 | orchestrator | 08:32:32.406 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.406385 | orchestrator | 08:32:32.406 STDOUT terraform:  + size = 80 2025-02-10 08:32:32.406417 | orchestrator | 08:32:32.406 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.406431 | orchestrator | 08:32:32.406 STDOUT terraform:  } 2025-02-10 08:32:32.406510 | orchestrator | 08:32:32.406 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-02-10 08:32:32.406604 | orchestrator | 08:32:32.406 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:32.406645 | orchestrator | 08:32:32.406 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.406679 | orchestrator | 08:32:32.406 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.406729 | orchestrator | 08:32:32.406 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.406781 | orchestrator | 08:32:32.406 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.406831 | orchestrator | 08:32:32.406 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.406891 | orchestrator | 08:32:32.406 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-02-10 08:32:32.406940 | orchestrator | 08:32:32.406 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.406973 | orchestrator | 08:32:32.406 STDOUT terraform:  + size = 80 2025-02-10 08:32:32.407006 | orchestrator | 08:32:32.406 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.407021 | orchestrator | 08:32:32.406 STDOUT terraform:  } 2025-02-10 08:32:32.407095 | orchestrator | 08:32:32.407 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-02-10 08:32:32.407172 | orchestrator | 08:32:32.407 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:32.407215 | orchestrator | 08:32:32.407 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.407246 | orchestrator | 08:32:32.407 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.407295 | orchestrator | 08:32:32.407 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.407345 | orchestrator | 08:32:32.407 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.407393 | orchestrator | 08:32:32.407 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.407456 | orchestrator | 08:32:32.407 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-02-10 08:32:32.407505 | orchestrator | 08:32:32.407 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.407554 | orchestrator | 08:32:32.407 STDOUT terraform:  + size = 80 2025-02-10 08:32:32.407611 | orchestrator | 08:32:32.407 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.407684 | orchestrator | 08:32:32.407 STDOUT terraform:  } 2025-02-10 08:32:32.407701 | orchestrator | 08:32:32.407 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-02-10 08:32:32.407756 | orchestrator | 08:32:32.407 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:32.407805 | orchestrator | 08:32:32.407 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.407837 | orchestrator | 08:32:32.407 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.407888 | orchestrator | 08:32:32.407 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.407935 | orchestrator | 08:32:32.407 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.407983 | orchestrator | 08:32:32.407 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.408045 | orchestrator | 08:32:32.407 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-02-10 08:32:32.408096 | orchestrator | 08:32:32.408 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.408120 | orchestrator | 08:32:32.408 STDOUT terraform:  + size = 80 2025-02-10 08:32:32.408153 | orchestrator | 08:32:32.408 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.408175 | orchestrator | 08:32:32.408 STDOUT terraform:  } 2025-02-10 08:32:32.408242 | orchestrator | 08:32:32.408 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-02-10 08:32:32.408314 | orchestrator | 08:32:32.408 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:32.408362 | orchestrator | 08:32:32.408 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.408397 | orchestrator | 08:32:32.408 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.408462 | orchestrator | 08:32:32.408 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.408493 | orchestrator | 08:32:32.408 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.408562 | orchestrator | 08:32:32.408 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.408876 | orchestrator | 08:32:32.408 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-02-10 08:32:32.409014 | orchestrator | 08:32:32.408 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.409037 | orchestrator | 08:32:32.408 STDOUT terraform:  + size = 80 2025-02-10 08:32:32.409055 | orchestrator | 08:32:32.408 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.409071 | orchestrator | 08:32:32.408 STDOUT terraform:  } 2025-02-10 08:32:32.409087 | orchestrator | 08:32:32.408 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-02-10 08:32:32.409105 | orchestrator | 08:32:32.408 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.409128 | orchestrator | 08:32:32.408 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.409144 | orchestrator | 08:32:32.408 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.409172 | orchestrator | 08:32:32.408 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.409187 | orchestrator | 08:32:32.408 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.409202 | orchestrator | 08:32:32.408 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-02-10 08:32:32.409217 | orchestrator | 08:32:32.409 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.409232 | orchestrator | 08:32:32.409 STDOUT terraform:  + size 
= 20 2025-02-10 08:32:32.409252 | orchestrator | 08:32:32.409 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.409268 | orchestrator | 08:32:32.409 STDOUT terraform:  } 2025-02-10 08:32:32.409287 | orchestrator | 08:32:32.409 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-02-10 08:32:32.409365 | orchestrator | 08:32:32.409 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.409386 | orchestrator | 08:32:32.409 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.409405 | orchestrator | 08:32:32.409 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.409485 | orchestrator | 08:32:32.409 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.409535 | orchestrator | 08:32:32.409 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.409654 | orchestrator | 08:32:32.409 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-02-10 08:32:32.409675 | orchestrator | 08:32:32.409 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.409694 | orchestrator | 08:32:32.409 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.409762 | orchestrator | 08:32:32.409 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.409833 | orchestrator | 08:32:32.409 STDOUT terraform:  } 2025-02-10 08:32:32.409854 | orchestrator | 08:32:32.409 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-02-10 08:32:32.409877 | orchestrator | 08:32:32.409 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.409956 | orchestrator | 08:32:32.409 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.410059 | orchestrator | 08:32:32.409 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.410086 | orchestrator | 08:32:32.409 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.410166 | orchestrator | 08:32:32.410 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.410187 | orchestrator | 08:32:32.410 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-02-10 08:32:32.410203 | orchestrator | 08:32:32.410 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.410221 | orchestrator | 08:32:32.410 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.410244 | orchestrator | 08:32:32.410 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.410263 | orchestrator | 08:32:32.410 STDOUT terraform:  } 2025-02-10 08:32:32.410340 | orchestrator | 08:32:32.410 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-02-10 08:32:32.410406 | orchestrator | 08:32:32.410 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.410426 | orchestrator | 08:32:32.410 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.410445 | orchestrator | 08:32:32.410 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.410515 | orchestrator | 08:32:32.410 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.410535 | orchestrator | 08:32:32.410 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.410630 | orchestrator | 08:32:32.410 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-02-10 08:32:32.410651 | orchestrator | 08:32:32.410 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.410669 | orchestrator | 08:32:32.410 STDOUT 
terraform:  + size = 20 2025-02-10 08:32:32.410688 | orchestrator | 08:32:32.410 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.410707 | orchestrator | 08:32:32.410 STDOUT terraform:  } 2025-02-10 08:32:32.410788 | orchestrator | 08:32:32.410 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-02-10 08:32:32.410842 | orchestrator | 08:32:32.410 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.410874 | orchestrator | 08:32:32.410 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.410893 | orchestrator | 08:32:32.410 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.410949 | orchestrator | 08:32:32.410 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.410970 | orchestrator | 08:32:32.410 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.411044 | orchestrator | 08:32:32.410 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-02-10 08:32:32.411078 | orchestrator | 08:32:32.411 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.411107 | orchestrator | 08:32:32.411 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.411126 | orchestrator | 08:32:32.411 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.411257 | orchestrator | 08:32:32.411 STDOUT terraform:  } 2025-02-10 08:32:32.411282 | orchestrator | 08:32:32.411 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-02-10 08:32:32.411354 | orchestrator | 08:32:32.411 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.411375 | orchestrator | 08:32:32.411 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.411423 | orchestrator | 08:32:32.411 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.411444 | orchestrator | 08:32:32.411 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.411535 | orchestrator | 08:32:32.411 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.411555 | orchestrator | 08:32:32.411 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-02-10 08:32:32.411644 | orchestrator | 08:32:32.411 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.411662 | orchestrator | 08:32:32.411 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.411685 | orchestrator | 08:32:32.411 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.411956 | orchestrator | 08:32:32.411 STDOUT terraform:  } 2025-02-10 08:32:32.412115 | orchestrator | 08:32:32.411 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-02-10 08:32:32.412142 | orchestrator | 08:32:32.411 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.412159 | orchestrator | 08:32:32.411 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.412174 | orchestrator | 08:32:32.411 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.412189 | orchestrator | 08:32:32.411 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.412204 | orchestrator | 08:32:32.411 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.412218 | orchestrator | 08:32:32.411 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-02-10 08:32:32.412232 | orchestrator | 08:32:32.411 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.412246 | orchestrator | 
08:32:32.412 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.412277 | orchestrator | 08:32:32.412 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.412319 | orchestrator | 08:32:32.412 STDOUT terraform:  } 2025-02-10 08:32:32.412334 | orchestrator | 08:32:32.412 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-02-10 08:32:32.412349 | orchestrator | 08:32:32.412 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.412368 | orchestrator | 08:32:32.412 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.412383 | orchestrator | 08:32:32.412 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.412398 | orchestrator | 08:32:32.412 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.412417 | orchestrator | 08:32:32.412 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.412435 | orchestrator | 08:32:32.412 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-02-10 08:32:32.412518 | orchestrator | 08:32:32.412 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.412536 | orchestrator | 08:32:32.412 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.412551 | orchestrator | 08:32:32.412 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.412569 | orchestrator | 08:32:32.412 STDOUT terraform:  } 2025-02-10 08:32:32.412637 | orchestrator | 08:32:32.412 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-02-10 08:32:32.412691 | orchestrator | 08:32:32.412 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.412712 | orchestrator | 08:32:32.412 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.412741 | orchestrator | 08:32:32.412 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.412760 | orchestrator | 08:32:32.412 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.412779 | orchestrator | 08:32:32.412 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.412852 | orchestrator | 08:32:32.412 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-02-10 08:32:32.412872 | orchestrator | 08:32:32.412 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.412891 | orchestrator | 08:32:32.412 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.412907 | orchestrator | 08:32:32.412 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.412925 | orchestrator | 08:32:32.412 STDOUT terraform:  } 2025-02-10 08:32:32.413096 | orchestrator | 08:32:32.412 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-02-10 08:32:32.413139 | orchestrator | 08:32:32.412 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.413158 | orchestrator | 08:32:32.413 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.413180 | orchestrator | 08:32:32.413 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.413242 | orchestrator | 08:32:32.413 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.413259 | orchestrator | 08:32:32.413 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.413276 | orchestrator | 08:32:32.413 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-02-10 08:32:32.413303 | orchestrator | 08:32:32.413 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.413317 | 
orchestrator | 08:32:32.413 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.413334 | orchestrator | 08:32:32.413 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.413393 | orchestrator | 08:32:32.413 STDOUT terraform:  } 2025-02-10 08:32:32.413410 | orchestrator | 08:32:32.413 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-02-10 08:32:32.413427 | orchestrator | 08:32:32.413 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.413484 | orchestrator | 08:32:32.413 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.413541 | orchestrator | 08:32:32.413 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.413561 | orchestrator | 08:32:32.413 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.413636 | orchestrator | 08:32:32.413 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.413657 | orchestrator | 08:32:32.413 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-02-10 08:32:32.413671 | orchestrator | 08:32:32.413 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.413688 | orchestrator | 08:32:32.413 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.413702 | orchestrator | 08:32:32.413 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.413718 | orchestrator | 08:32:32.413 STDOUT terraform:  } 2025-02-10 08:32:32.413792 | orchestrator | 08:32:32.413 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-02-10 08:32:32.413862 | orchestrator | 08:32:32.413 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.413922 | orchestrator | 08:32:32.413 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.413985 | orchestrator | 08:32:32.413 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.414004 | orchestrator | 08:32:32.413 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.414095 | orchestrator | 08:32:32.413 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.414117 | orchestrator | 08:32:32.413 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-02-10 08:32:32.414131 | orchestrator | 08:32:32.414 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.414148 | orchestrator | 08:32:32.414 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.414165 | orchestrator | 08:32:32.414 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.414248 | orchestrator | 08:32:32.414 STDOUT terraform:  } 2025-02-10 08:32:32.414268 | orchestrator | 08:32:32.414 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-02-10 08:32:32.414323 | orchestrator | 08:32:32.414 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.414341 | orchestrator | 08:32:32.414 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.414368 | orchestrator | 08:32:32.414 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.414416 | orchestrator | 08:32:32.414 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.414478 | orchestrator | 08:32:32.414 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.414544 | orchestrator | 08:32:32.414 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-02-10 08:32:32.414561 | orchestrator | 08:32:32.414 STDOUT terraform:  + region = (known after apply) 
2025-02-10 08:32:32.414621 | orchestrator | 08:32:32.414 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.414637 | orchestrator | 08:32:32.414 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.414654 | orchestrator | 08:32:32.414 STDOUT terraform:  } 2025-02-10 08:32:32.414719 | orchestrator | 08:32:32.414 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-02-10 08:32:32.414790 | orchestrator | 08:32:32.414 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.414808 | orchestrator | 08:32:32.414 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.414825 | orchestrator | 08:32:32.414 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.414889 | orchestrator | 08:32:32.414 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.414907 | orchestrator | 08:32:32.414 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.414979 | orchestrator | 08:32:32.414 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-02-10 08:32:32.414997 | orchestrator | 08:32:32.414 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.415013 | orchestrator | 08:32:32.414 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.415059 | orchestrator | 08:32:32.415 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.415077 | orchestrator | 08:32:32.415 STDOUT terraform:  } 2025-02-10 08:32:32.415156 | orchestrator | 08:32:32.415 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-02-10 08:32:32.415213 | orchestrator | 08:32:32.415 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.415232 | orchestrator | 08:32:32.415 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.415248 | orchestrator | 08:32:32.415 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.415311 | orchestrator | 08:32:32.415 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.415329 | orchestrator | 08:32:32.415 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.415402 | orchestrator | 08:32:32.415 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-02-10 08:32:32.415421 | orchestrator | 08:32:32.415 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.415438 | orchestrator | 08:32:32.415 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.415507 | orchestrator | 08:32:32.415 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.415563 | orchestrator | 08:32:32.415 STDOUT terraform:  } 2025-02-10 08:32:32.415602 | orchestrator | 08:32:32.415 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-02-10 08:32:32.415672 | orchestrator | 08:32:32.415 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.415691 | orchestrator | 08:32:32.415 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.415733 | orchestrator | 08:32:32.415 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.415752 | orchestrator | 08:32:32.415 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.415819 | orchestrator | 08:32:32.415 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.415838 | orchestrator | 08:32:32.415 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-02-10 08:32:32.415852 | orchestrator | 08:32:32.415 STDOUT terraform:  + region 
= (known after apply) 2025-02-10 08:32:32.415868 | orchestrator | 08:32:32.415 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.415882 | orchestrator | 08:32:32.415 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.415897 | orchestrator | 08:32:32.415 STDOUT terraform:  } 2025-02-10 08:32:32.415962 | orchestrator | 08:32:32.415 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-02-10 08:32:32.416011 | orchestrator | 08:32:32.415 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.416030 | orchestrator | 08:32:32.415 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.416047 | orchestrator | 08:32:32.416 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.416094 | orchestrator | 08:32:32.416 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.416114 | orchestrator | 08:32:32.416 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.416174 | orchestrator | 08:32:32.416 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-02-10 08:32:32.416193 | orchestrator | 08:32:32.416 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.416210 | orchestrator | 08:32:32.416 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.416228 | orchestrator | 08:32:32.416 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.416247 | orchestrator | 08:32:32.416 STDOUT terraform:  } 2025-02-10 08:32:32.416320 | orchestrator | 08:32:32.416 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-02-10 08:32:32.416371 | orchestrator | 08:32:32.416 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:32.416390 | orchestrator | 08:32:32.416 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:32.416407 | orchestrator | 08:32:32.416 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.416453 | orchestrator | 08:32:32.416 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.416471 | orchestrator | 08:32:32.416 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:32.416535 | orchestrator | 08:32:32.416 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-02-10 08:32:32.416554 | orchestrator | 08:32:32.416 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.416622 | orchestrator | 08:32:32.416 STDOUT terraform:  + size = 20 2025-02-10 08:32:32.416641 | orchestrator | 08:32:32.416 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:32.416659 | orchestrator | 08:32:32.416 STDOUT terraform:  } 2025-02-10 08:32:32.416723 | orchestrator | 08:32:32.416 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-02-10 08:32:32.416753 | orchestrator | 08:32:32.416 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-02-10 08:32:32.416778 | orchestrator | 08:32:32.416 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:32.416802 | orchestrator | 08:32:32.416 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:32.416826 | orchestrator | 08:32:32.416 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:32.416852 | orchestrator | 08:32:32.416 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.416879 | orchestrator | 08:32:32.416 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.416906 | orchestrator | 
08:32:32.416 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:32.416973 | orchestrator | 08:32:32.416 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:32.416994 | orchestrator | 08:32:32.416 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:32.417043 | orchestrator | 08:32:32.416 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-02-10 08:32:32.417106 | orchestrator | 08:32:32.417 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:32.417128 | orchestrator | 08:32:32.417 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.417187 | orchestrator | 08:32:32.417 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.417208 | orchestrator | 08:32:32.417 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:32.417222 | orchestrator | 08:32:32.417 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:32.417238 | orchestrator | 08:32:32.417 STDOUT terraform:  + name = "testbed-manager" 2025-02-10 08:32:32.417255 | orchestrator | 08:32:32.417 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:32.417316 | orchestrator | 08:32:32.417 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.417334 | orchestrator | 08:32:32.417 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:32.417351 | orchestrator | 08:32:32.417 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:32.417422 | orchestrator | 08:32:32.417 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:32.417445 | orchestrator | 08:32:32.417 STDOUT terraform:  + user_data = (known after apply) 2025-02-10 08:32:32.417462 | orchestrator | 08:32:32.417 STDOUT terraform:  + block_device { 2025-02-10 08:32:32.417497 | orchestrator | 08:32:32.417 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:32.417557 | orchestrator | 08:32:32.417 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:32.417618 | orchestrator | 08:32:32.417 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:32.417649 | orchestrator | 08:32:32.417 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:32.417663 | orchestrator | 08:32:32.417 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:32.417680 | orchestrator | 08:32:32.417 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.417702 | orchestrator | 08:32:32.417 STDOUT terraform:  } 2025-02-10 08:32:32.417719 | orchestrator | 08:32:32.417 STDOUT terraform:  + network { 2025-02-10 08:32:32.417766 | orchestrator | 08:32:32.417 STDOUT terraform:  + access_network = false 2025-02-10 08:32:32.417784 | orchestrator | 08:32:32.417 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:32.417847 | orchestrator | 08:32:32.417 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:32.417866 | orchestrator | 08:32:32.417 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:32.417910 | orchestrator | 08:32:32.417 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.417931 | orchestrator | 08:32:32.417 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:32.417947 | orchestrator | 08:32:32.417 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.417964 | orchestrator | 08:32:32.417 STDOUT terraform:  } 2025-02-10 08:32:32.418047 | orchestrator | 08:32:32.417 STDOUT terraform:  } 2025-02-10 08:32:32.418069 | orchestrator | 08:32:32.417 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be 
created 2025-02-10 08:32:32.418086 | orchestrator | 08:32:32.418 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:32.418148 | orchestrator | 08:32:32.418 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:32.418167 | orchestrator | 08:32:32.418 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:32.418226 | orchestrator | 08:32:32.418 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:32.418245 | orchestrator | 08:32:32.418 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.418292 | orchestrator | 08:32:32.418 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.418353 | orchestrator | 08:32:32.418 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:32.418374 | orchestrator | 08:32:32.418 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:32.418428 | orchestrator | 08:32:32.418 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:32.418448 | orchestrator | 08:32:32.418 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:32.418495 | orchestrator | 08:32:32.418 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:32.418513 | orchestrator | 08:32:32.418 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.418570 | orchestrator | 08:32:32.418 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.418607 | orchestrator | 08:32:32.418 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:32.418662 | orchestrator | 08:32:32.418 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:32.418699 | orchestrator | 08:32:32.418 STDOUT terraform:  + name = "testbed-node-0" 2025-02-10 08:32:32.418713 | orchestrator | 08:32:32.418 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:32.418730 | orchestrator | 08:32:32.418 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.418747 | orchestrator | 08:32:32.418 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:32.418764 | orchestrator | 08:32:32.418 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:32.418828 | orchestrator | 08:32:32.418 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:32.418877 | orchestrator | 08:32:32.418 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:32.418893 | orchestrator | 08:32:32.418 STDOUT terraform:  + block_device { 2025-02-10 08:32:32.418909 | orchestrator | 08:32:32.418 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:32.418933 | orchestrator | 08:32:32.418 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:32.418950 | orchestrator | 08:32:32.418 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:32.419000 | orchestrator | 08:32:32.418 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:32.419018 | orchestrator | 08:32:32.418 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:32.419077 | orchestrator | 08:32:32.419 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.419092 | orchestrator | 08:32:32.419 STDOUT terraform:  } 2025-02-10 08:32:32.419106 | orchestrator | 08:32:32.419 STDOUT terraform:  + network { 2025-02-10 08:32:32.419126 | orchestrator | 08:32:32.419 STDOUT terraform:  + access_network = false 2025-02-10 08:32:32.419178 | orchestrator | 08:32:32.419 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:32.419197 | orchestrator | 08:32:32.419 
STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:32.419241 | orchestrator | 08:32:32.419 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:32.419259 | orchestrator | 08:32:32.419 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.419309 | orchestrator | 08:32:32.419 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:32.419329 | orchestrator | 08:32:32.419 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.419391 | orchestrator | 08:32:32.419 STDOUT terraform:  } 2025-02-10 08:32:32.419407 | orchestrator | 08:32:32.419 STDOUT terraform:  } 2025-02-10 08:32:32.419425 | orchestrator | 08:32:32.419 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-02-10 08:32:32.419440 | orchestrator | 08:32:32.419 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:32.419456 | orchestrator | 08:32:32.419 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:32.419473 | orchestrator | 08:32:32.419 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:32.419531 | orchestrator | 08:32:32.419 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:32.419558 | orchestrator | 08:32:32.419 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.419578 | orchestrator | 08:32:32.419 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.419613 | orchestrator | 08:32:32.419 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:32.419631 | orchestrator | 08:32:32.419 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:32.419692 | orchestrator | 08:32:32.419 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:32.419716 | orchestrator | 08:32:32.419 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:32.419734 | orchestrator | 08:32:32.419 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:32.419784 | orchestrator | 08:32:32.419 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.419802 | orchestrator | 08:32:32.419 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.419848 | orchestrator | 08:32:32.419 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:32.419905 | orchestrator | 08:32:32.419 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:32.419924 | orchestrator | 08:32:32.419 STDOUT terraform:  + name = "testbed-node-1" 2025-02-10 08:32:32.419938 | orchestrator | 08:32:32.419 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:32.419955 | orchestrator | 08:32:32.419 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.420003 | orchestrator | 08:32:32.419 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:32.420062 | orchestrator | 08:32:32.419 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:32.420081 | orchestrator | 08:32:32.419 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:32.420098 | orchestrator | 08:32:32.420 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:32.420115 | orchestrator | 08:32:32.420 STDOUT terraform:  + block_device { 2025-02-10 08:32:32.420132 | orchestrator | 08:32:32.420 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:32.420181 | orchestrator | 08:32:32.420 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:32.420235 | orchestrator | 08:32:32.420 STDOUT terraform:  + 
destination_type = "volume" 2025-02-10 08:32:32.420254 | orchestrator | 08:32:32.420 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:32.420310 | orchestrator | 08:32:32.420 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:32.420331 | orchestrator | 08:32:32.420 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.420384 | orchestrator | 08:32:32.420 STDOUT terraform:  } 2025-02-10 08:32:32.420399 | orchestrator | 08:32:32.420 STDOUT terraform:  + network { 2025-02-10 08:32:32.420412 | orchestrator | 08:32:32.420 STDOUT terraform:  + access_network = false 2025-02-10 08:32:32.420430 | orchestrator | 08:32:32.420 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:32.420471 | orchestrator | 08:32:32.420 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:32.420494 | orchestrator | 08:32:32.420 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:32.420513 | orchestrator | 08:32:32.420 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.420528 | orchestrator | 08:32:32.420 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:32.420541 | orchestrator | 08:32:32.420 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.420556 | orchestrator | 08:32:32.420 STDOUT terraform:  } 2025-02-10 08:32:32.420624 | orchestrator | 08:32:32.420 STDOUT terraform:  } 2025-02-10 08:32:32.420645 | orchestrator | 08:32:32.420 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-02-10 08:32:32.420660 | orchestrator | 08:32:32.420 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:32.420678 | orchestrator | 08:32:32.420 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:32.420696 | orchestrator | 08:32:32.420 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:32.420797 | orchestrator | 08:32:32.420 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:32.420815 | orchestrator | 08:32:32.420 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.420832 | orchestrator | 08:32:32.420 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.420846 | orchestrator | 08:32:32.420 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:32.420862 | orchestrator | 08:32:32.420 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:32.420970 | orchestrator | 08:32:32.420 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:32.420989 | orchestrator | 08:32:32.420 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:32.421035 | orchestrator | 08:32:32.420 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:32.421054 | orchestrator | 08:32:32.421 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.421099 | orchestrator | 08:32:32.421 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.421117 | orchestrator | 08:32:32.421 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:32.421162 | orchestrator | 08:32:32.421 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:32.421177 | orchestrator | 08:32:32.421 STDOUT terraform:  + name = "testbed-node-2" 2025-02-10 08:32:32.421227 | orchestrator | 08:32:32.421 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:32.421242 | orchestrator | 08:32:32.421 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.421296 | orchestrator | 08:32:32.421 STDOUT terraform:  
+ security_groups = (known after apply) 2025-02-10 08:32:32.421351 | orchestrator | 08:32:32.421 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:32.421367 | orchestrator | 08:32:32.421 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:32.421418 | orchestrator | 08:32:32.421 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:32.421432 | orchestrator | 08:32:32.421 STDOUT terraform:  + block_device { 2025-02-10 08:32:32.421460 | orchestrator | 08:32:32.421 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:32.421499 | orchestrator | 08:32:32.421 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:32.421515 | orchestrator | 08:32:32.421 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:32.421553 | orchestrator | 08:32:32.421 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:32.421569 | orchestrator | 08:32:32.421 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:32.421600 | orchestrator | 08:32:32.421 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.421616 | orchestrator | 08:32:32.421 STDOUT terraform:  } 2025-02-10 08:32:32.421630 | orchestrator | 08:32:32.421 STDOUT terraform:  + network { 2025-02-10 08:32:32.421644 | orchestrator | 08:32:32.421 STDOUT terraform:  + access_network = false 2025-02-10 08:32:32.421710 | orchestrator | 08:32:32.421 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:32.421726 | orchestrator | 08:32:32.421 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:32.421767 | orchestrator | 08:32:32.421 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:32.421783 | orchestrator | 08:32:32.421 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.421836 | orchestrator | 08:32:32.421 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:32.421853 | orchestrator | 08:32:32.421 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.421868 | orchestrator | 08:32:32.421 STDOUT terraform:  } 2025-02-10 08:32:32.421934 | orchestrator | 08:32:32.421 STDOUT terraform:  } 2025-02-10 08:32:32.421952 | orchestrator | 08:32:32.421 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-02-10 08:32:32.421966 | orchestrator | 08:32:32.421 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:32.422045 | orchestrator | 08:32:32.421 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:32.422064 | orchestrator | 08:32:32.421 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:32.422115 | orchestrator | 08:32:32.422 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:32.422130 | orchestrator | 08:32:32.422 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.422180 | orchestrator | 08:32:32.422 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.422233 | orchestrator | 08:32:32.422 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:32.422249 | orchestrator | 08:32:32.422 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:32.422297 | orchestrator | 08:32:32.422 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:32.422325 | orchestrator | 08:32:32.422 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:32.422337 | orchestrator | 08:32:32.422 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:32.422350 | orchestrator | 
08:32:32.422 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.422378 | orchestrator | 08:32:32.422 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.422393 | orchestrator | 08:32:32.422 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:32.422436 | orchestrator | 08:32:32.422 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:32.422452 | orchestrator | 08:32:32.422 STDOUT terraform:  + name = "testbed-node-3" 2025-02-10 08:32:32.422466 | orchestrator | 08:32:32.422 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:32.422518 | orchestrator | 08:32:32.422 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.422534 | orchestrator | 08:32:32.422 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:32.422548 | orchestrator | 08:32:32.422 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:32.422618 | orchestrator | 08:32:32.422 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:32.422676 | orchestrator | 08:32:32.422 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:32.422690 | orchestrator | 08:32:32.422 STDOUT terraform:  + block_device { 2025-02-10 08:32:32.422703 | orchestrator | 08:32:32.422 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:32.422717 | orchestrator | 08:32:32.422 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:32.422760 | orchestrator | 08:32:32.422 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:32.422775 | orchestrator | 08:32:32.422 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:32.422789 | orchestrator | 08:32:32.422 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:32.422850 | orchestrator | 08:32:32.422 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.422890 | orchestrator | 08:32:32.422 STDOUT terraform:  } 2025-02-10 08:32:32.422903 | orchestrator | 08:32:32.422 STDOUT terraform:  + network { 2025-02-10 08:32:32.422917 | orchestrator | 08:32:32.422 STDOUT terraform:  + access_network = false 2025-02-10 08:32:32.422929 | orchestrator | 08:32:32.422 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:32.422942 | orchestrator | 08:32:32.422 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:32.422956 | orchestrator | 08:32:32.422 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:32.423004 | orchestrator | 08:32:32.422 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.423019 | orchestrator | 08:32:32.422 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:32.423060 | orchestrator | 08:32:32.423 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.423126 | orchestrator | 08:32:32.423 STDOUT terraform:  } 2025-02-10 08:32:32.423139 | orchestrator | 08:32:32.423 STDOUT terraform:  } 2025-02-10 08:32:32.423154 | orchestrator | 08:32:32.423 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-02-10 08:32:32.423209 | orchestrator | 08:32:32.423 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:32.423224 | orchestrator | 08:32:32.423 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:32.423271 | orchestrator | 08:32:32.423 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:32.423287 | orchestrator | 08:32:32.423 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 
08:32:32.423336 | orchestrator | 08:32:32.423 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.423353 | orchestrator | 08:32:32.423 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.423365 | orchestrator | 08:32:32.423 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:32.423378 | orchestrator | 08:32:32.423 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:32.423392 | orchestrator | 08:32:32.423 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:32.423440 | orchestrator | 08:32:32.423 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:32.423495 | orchestrator | 08:32:32.423 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:32.423510 | orchestrator | 08:32:32.423 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.423561 | orchestrator | 08:32:32.423 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.423641 | orchestrator | 08:32:32.423 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:32.423659 | orchestrator | 08:32:32.423 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:32.423671 | orchestrator | 08:32:32.423 STDOUT terraform:  + name = "testbed-node-4" 2025-02-10 08:32:32.423684 | orchestrator | 08:32:32.423 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:32.423724 | orchestrator | 08:32:32.423 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.423739 | orchestrator | 08:32:32.423 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:32.423777 | orchestrator | 08:32:32.423 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:32.423793 | orchestrator | 08:32:32.423 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:32.427760 | orchestrator | 08:32:32.423 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:32.427818 | orchestrator | 08:32:32.423 STDOUT terraform:  + block_device { 2025-02-10 08:32:32.427829 | orchestrator | 08:32:32.423 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:32.427840 | orchestrator | 08:32:32.423 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:32.427850 | orchestrator | 08:32:32.423 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:32.427858 | orchestrator | 08:32:32.423 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:32.427867 | orchestrator | 08:32:32.423 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:32.427877 | orchestrator | 08:32:32.423 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.427886 | orchestrator | 08:32:32.424 STDOUT terraform:  } 2025-02-10 08:32:32.427896 | orchestrator | 08:32:32.424 STDOUT terraform:  + network { 2025-02-10 08:32:32.427905 | orchestrator | 08:32:32.424 STDOUT terraform:  + access_network = false 2025-02-10 08:32:32.427937 | orchestrator | 08:32:32.424 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:32.427946 | orchestrator | 08:32:32.424 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:32.427955 | orchestrator | 08:32:32.424 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:32.427964 | orchestrator | 08:32:32.424 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.427973 | orchestrator | 08:32:32.424 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:32.427982 | orchestrator | 08:32:32.424 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 
08:32:32.427991 | orchestrator | 08:32:32.424 STDOUT terraform:  } 2025-02-10 08:32:32.428001 | orchestrator | 08:32:32.424 STDOUT terraform:  } 2025-02-10 08:32:32.428010 | orchestrator | 08:32:32.424 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-02-10 08:32:32.428019 | orchestrator | 08:32:32.424 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:32.428028 | orchestrator | 08:32:32.424 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:32.428037 | orchestrator | 08:32:32.424 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:32.428046 | orchestrator | 08:32:32.424 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:32.428055 | orchestrator | 08:32:32.424 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.428063 | orchestrator | 08:32:32.424 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:32.428073 | orchestrator | 08:32:32.424 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:32.428083 | orchestrator | 08:32:32.424 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:32.428092 | orchestrator | 08:32:32.424 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:32.428111 | orchestrator | 08:32:32.424 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:32.428120 | orchestrator | 08:32:32.424 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:32.428129 | orchestrator | 08:32:32.424 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.428138 | orchestrator | 08:32:32.424 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:32.428147 | orchestrator | 08:32:32.424 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:32.428156 | orchestrator | 08:32:32.424 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:32.428165 | orchestrator | 08:32:32.424 STDOUT terraform:  + name = "testbed-node-5" 2025-02-10 08:32:32.428174 | orchestrator | 08:32:32.424 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:32.428183 | orchestrator | 08:32:32.424 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.428203 | orchestrator | 08:32:32.424 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:32.428213 | orchestrator | 08:32:32.424 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:32.428228 | orchestrator | 08:32:32.424 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:32.428237 | orchestrator | 08:32:32.424 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:32.428246 | orchestrator | 08:32:32.424 STDOUT terraform:  + block_device { 2025-02-10 08:32:32.428255 | orchestrator | 08:32:32.424 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:32.428263 | orchestrator | 08:32:32.425 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:32.428273 | orchestrator | 08:32:32.425 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:32.428281 | orchestrator | 08:32:32.425 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:32.428290 | orchestrator | 08:32:32.425 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:32.428299 | orchestrator | 08:32:32.425 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.428308 | orchestrator | 08:32:32.425 STDOUT terraform:  } 2025-02-10 08:32:32.428318 | orchestrator | 08:32:32.425 STDOUT terraform:  + 
network { 2025-02-10 08:32:32.428326 | orchestrator | 08:32:32.425 STDOUT terraform:  + access_network = false 2025-02-10 08:32:32.428338 | orchestrator | 08:32:32.425 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:32.428347 | orchestrator | 08:32:32.425 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:32.428356 | orchestrator | 08:32:32.425 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:32.428365 | orchestrator | 08:32:32.425 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:32.428374 | orchestrator | 08:32:32.425 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:32.428383 | orchestrator | 08:32:32.425 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:32.428392 | orchestrator | 08:32:32.425 STDOUT terraform:  } 2025-02-10 08:32:32.428401 | orchestrator | 08:32:32.425 STDOUT terraform:  } 2025-02-10 08:32:32.428414 | orchestrator | 08:32:32.425 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-02-10 08:32:32.428424 | orchestrator | 08:32:32.425 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-02-10 08:32:32.428432 | orchestrator | 08:32:32.425 STDOUT terraform:  + fingerprint = (known after apply) 2025-02-10 08:32:32.428442 | orchestrator | 08:32:32.425 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.428451 | orchestrator | 08:32:32.425 STDOUT terraform:  + name = "testbed" 2025-02-10 08:32:32.428460 | orchestrator | 08:32:32.425 STDOUT terraform:  + private_key = (sensitive value) 2025-02-10 08:32:32.428468 | orchestrator | 08:32:32.425 STDOUT terraform:  + public_key = (known after apply) 2025-02-10 08:32:32.428477 | orchestrator | 08:32:32.425 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.428486 | orchestrator | 08:32:32.425 STDOUT terraform:  + user_id = (known after apply) 2025-02-10 08:32:32.428495 | orchestrator | 08:32:32.425 STDOUT terraform:  } 2025-02-10 08:32:32.428504 | orchestrator | 08:32:32.425 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-02-10 08:32:32.428519 | orchestrator | 08:32:32.425 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.428528 | orchestrator | 08:32:32.425 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.428537 | orchestrator | 08:32:32.425 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.428546 | orchestrator | 08:32:32.425 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.428555 | orchestrator | 08:32:32.425 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.428568 | orchestrator | 08:32:32.425 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.428631 | orchestrator | 08:32:32.425 STDOUT terraform:  } 2025-02-10 08:32:32.428644 | orchestrator | 08:32:32.425 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-02-10 08:32:32.428654 | orchestrator | 08:32:32.425 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.428663 | orchestrator | 08:32:32.425 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.428672 | orchestrator | 08:32:32.425 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.428680 | orchestrator | 08:32:32.425 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 
08:32:32.428688 | orchestrator | 08:32:32.425 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.428701 | orchestrator | 08:32:32.425 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.428711 | orchestrator | 08:32:32.426 STDOUT terraform:  } 2025-02-10 08:32:32.428719 | orchestrator | 08:32:32.426 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-02-10 08:32:32.428727 | orchestrator | 08:32:32.426 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.428735 | orchestrator | 08:32:32.426 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.428743 | orchestrator | 08:32:32.426 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.428751 | orchestrator | 08:32:32.426 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.428760 | orchestrator | 08:32:32.426 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.428768 | orchestrator | 08:32:32.426 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.428776 | orchestrator | 08:32:32.426 STDOUT terraform:  } 2025-02-10 08:32:32.428784 | orchestrator | 08:32:32.426 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-02-10 08:32:32.428793 | orchestrator | 08:32:32.426 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.428801 | orchestrator | 08:32:32.426 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.428810 | orchestrator | 08:32:32.426 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.428818 | orchestrator | 08:32:32.426 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.428826 | orchestrator | 08:32:32.426 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.428840 | orchestrator | 08:32:32.426 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.428849 | orchestrator | 08:32:32.426 STDOUT terraform:  } 2025-02-10 08:32:32.428858 | orchestrator | 08:32:32.426 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-02-10 08:32:32.428866 | orchestrator | 08:32:32.426 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.428875 | orchestrator | 08:32:32.426 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.428883 | orchestrator | 08:32:32.426 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.428891 | orchestrator | 08:32:32.426 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.428899 | orchestrator | 08:32:32.426 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.428907 | orchestrator | 08:32:32.426 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.428916 | orchestrator | 08:32:32.426 STDOUT terraform:  } 2025-02-10 08:32:32.428924 | orchestrator | 08:32:32.426 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-02-10 08:32:32.428933 | orchestrator | 08:32:32.426 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.428941 | orchestrator | 08:32:32.427 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.428956 | orchestrator | 08:32:32.427 STDOUT terraform:  + id = 
(known after apply) 2025-02-10 08:32:32.428964 | orchestrator | 08:32:32.427 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.428973 | orchestrator | 08:32:32.427 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.428981 | orchestrator | 08:32:32.427 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.428989 | orchestrator | 08:32:32.427 STDOUT terraform:  } 2025-02-10 08:32:32.428998 | orchestrator | 08:32:32.427 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-02-10 08:32:32.429011 | orchestrator | 08:32:32.427 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429020 | orchestrator | 08:32:32.427 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429028 | orchestrator | 08:32:32.427 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429036 | orchestrator | 08:32:32.427 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429045 | orchestrator | 08:32:32.427 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429053 | orchestrator | 08:32:32.427 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429061 | orchestrator | 08:32:32.427 STDOUT terraform:  } 2025-02-10 08:32:32.429070 | orchestrator | 08:32:32.427 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-02-10 08:32:32.429078 | orchestrator | 08:32:32.427 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429086 | orchestrator | 08:32:32.427 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429100 | orchestrator | 08:32:32.427 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429108 | orchestrator | 08:32:32.427 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429155 | orchestrator | 08:32:32.427 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429165 | orchestrator | 08:32:32.427 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429173 | orchestrator | 08:32:32.427 STDOUT terraform:  } 2025-02-10 08:32:32.429181 | orchestrator | 08:32:32.427 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-02-10 08:32:32.429190 | orchestrator | 08:32:32.427 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429198 | orchestrator | 08:32:32.427 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429206 | orchestrator | 08:32:32.427 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429214 | orchestrator | 08:32:32.427 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429222 | orchestrator | 08:32:32.427 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429231 | orchestrator | 08:32:32.427 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429239 | orchestrator | 08:32:32.427 STDOUT terraform:  } 2025-02-10 08:32:32.429247 | orchestrator | 08:32:32.427 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-02-10 08:32:32.429256 | orchestrator | 08:32:32.428 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429264 | orchestrator | 
08:32:32.428 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429273 | orchestrator | 08:32:32.428 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429281 | orchestrator | 08:32:32.428 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429289 | orchestrator | 08:32:32.428 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429297 | orchestrator | 08:32:32.428 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429305 | orchestrator | 08:32:32.428 STDOUT terraform:  } 2025-02-10 08:32:32.429319 | orchestrator | 08:32:32.428 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created 2025-02-10 08:32:32.429357 | orchestrator | 08:32:32.428 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429366 | orchestrator | 08:32:32.428 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429376 | orchestrator | 08:32:32.428 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429384 | orchestrator | 08:32:32.428 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429393 | orchestrator | 08:32:32.428 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429401 | orchestrator | 08:32:32.428 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429409 | orchestrator | 08:32:32.428 STDOUT terraform:  } 2025-02-10 08:32:32.429417 | orchestrator | 08:32:32.428 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created 2025-02-10 08:32:32.429431 | orchestrator | 08:32:32.428 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429439 | orchestrator | 08:32:32.428 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429448 | orchestrator | 08:32:32.428 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429456 | orchestrator | 08:32:32.428 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429464 | orchestrator | 08:32:32.428 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429472 | orchestrator | 08:32:32.428 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429480 | orchestrator | 08:32:32.428 STDOUT terraform:  } 2025-02-10 08:32:32.429489 | orchestrator | 08:32:32.428 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created 2025-02-10 08:32:32.429497 | orchestrator | 08:32:32.428 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429505 | orchestrator | 08:32:32.428 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429514 | orchestrator | 08:32:32.428 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429522 | orchestrator | 08:32:32.428 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429530 | orchestrator | 08:32:32.428 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429539 | orchestrator | 08:32:32.428 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429547 | orchestrator | 08:32:32.428 STDOUT terraform:  } 2025-02-10 08:32:32.429555 | orchestrator | 08:32:32.428 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created 2025-02-10 08:32:32.429564 | orchestrator | 
08:32:32.428 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429572 | orchestrator | 08:32:32.429 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429605 | orchestrator | 08:32:32.429 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429614 | orchestrator | 08:32:32.429 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429622 | orchestrator | 08:32:32.429 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429631 | orchestrator | 08:32:32.429 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429639 | orchestrator | 08:32:32.429 STDOUT terraform:  } 2025-02-10 08:32:32.429648 | orchestrator | 08:32:32.429 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created 2025-02-10 08:32:32.429656 | orchestrator | 08:32:32.429 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429664 | orchestrator | 08:32:32.429 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429672 | orchestrator | 08:32:32.429 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429681 | orchestrator | 08:32:32.429 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429693 | orchestrator | 08:32:32.429 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429751 | orchestrator | 08:32:32.429 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429764 | orchestrator | 08:32:32.429 STDOUT terraform:  } 2025-02-10 08:32:32.429773 | orchestrator | 08:32:32.429 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created 2025-02-10 08:32:32.429781 | orchestrator | 08:32:32.429 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429789 | orchestrator | 08:32:32.429 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429798 | orchestrator | 08:32:32.429 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429806 | orchestrator | 08:32:32.429 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429814 | orchestrator | 08:32:32.429 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429823 | orchestrator | 08:32:32.429 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429831 | orchestrator | 08:32:32.429 STDOUT terraform:  } 2025-02-10 08:32:32.429840 | orchestrator | 08:32:32.429 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-02-10 08:32:32.429852 | orchestrator | 08:32:32.429 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.429880 | orchestrator | 08:32:32.429 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.429889 | orchestrator | 08:32:32.429 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.429897 | orchestrator | 08:32:32.429 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.429906 | orchestrator | 08:32:32.429 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.429917 | orchestrator | 08:32:32.429 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.429947 | orchestrator | 08:32:32.429 STDOUT terraform:  } 2025-02-10 08:32:32.429959 | orchestrator | 08:32:32.429 
STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-02-10 08:32:32.430000 | orchestrator | 08:32:32.429 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:32.430084 | orchestrator | 08:32:32.429 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:32.430134 | orchestrator | 08:32:32.430 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.430146 | orchestrator | 08:32:32.430 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:32.430155 | orchestrator | 08:32:32.430 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.430166 | orchestrator | 08:32:32.430 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:32.430228 | orchestrator | 08:32:32.430 STDOUT terraform:  } 2025-02-10 08:32:32.430240 | orchestrator | 08:32:32.430 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-02-10 08:32:32.430287 | orchestrator | 08:32:32.430 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-02-10 08:32:32.430319 | orchestrator | 08:32:32.430 STDOUT terraform:  + fixed_ip = (known after apply) 2025-02-10 08:32:32.430337 | orchestrator | 08:32:32.430 STDOUT terraform:  + floating_ip = (known after apply) 2025-02-10 08:32:32.430373 | orchestrator | 08:32:32.430 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.430405 | orchestrator | 08:32:32.430 STDOUT terraform:  + port_id = (known after apply) 2025-02-10 08:32:32.430444 | orchestrator | 08:32:32.430 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.430493 | orchestrator | 08:32:32.430 STDOUT terraform:  } 2025-02-10 08:32:32.430505 | orchestrator | 08:32:32.430 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-02-10 08:32:32.430543 | orchestrator | 08:32:32.430 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-02-10 08:32:32.430555 | orchestrator | 08:32:32.430 STDOUT terraform:  + address = (known after apply) 2025-02-10 08:32:32.430618 | orchestrator | 08:32:32.430 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.430650 | orchestrator | 08:32:32.430 STDOUT terraform:  + dns_domain = (known after apply) 2025-02-10 08:32:32.430678 | orchestrator | 08:32:32.430 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:32.430708 | orchestrator | 08:32:32.430 STDOUT terraform:  + fixed_ip = (known after apply) 2025-02-10 08:32:32.430737 | orchestrator | 08:32:32.430 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.430749 | orchestrator | 08:32:32.430 STDOUT terraform:  + pool = "public" 2025-02-10 08:32:32.430780 | orchestrator | 08:32:32.430 STDOUT terraform:  + port_id = (known after apply) 2025-02-10 08:32:32.430810 | orchestrator | 08:32:32.430 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.430822 | orchestrator | 08:32:32.430 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.430861 | orchestrator | 08:32:32.430 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.430910 | orchestrator | 08:32:32.430 STDOUT terraform:  } 2025-02-10 08:32:32.430921 | orchestrator | 08:32:32.430 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-02-10 08:32:32.430959 | 
orchestrator | 08:32:32.430 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-02-10 08:32:32.431000 | orchestrator | 08:32:32.430 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.431038 | orchestrator | 08:32:32.430 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.431049 | orchestrator | 08:32:32.431 STDOUT terraform:  + availability_zone_hints = [ 2025-02-10 08:32:32.431060 | orchestrator | 08:32:32.431 STDOUT terraform:  + "nova", 2025-02-10 08:32:32.431072 | orchestrator | 08:32:32.431 STDOUT terraform:  ] 2025-02-10 08:32:32.431144 | orchestrator | 08:32:32.431 STDOUT terraform:  + dns_domain = (known after apply) 2025-02-10 08:32:32.431181 | orchestrator | 08:32:32.431 STDOUT terraform:  + external = (known after apply) 2025-02-10 08:32:32.431220 | orchestrator | 08:32:32.431 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.431258 | orchestrator | 08:32:32.431 STDOUT terraform:  + mtu = (known after apply) 2025-02-10 08:32:32.431297 | orchestrator | 08:32:32.431 STDOUT terraform:  + name = "net-testbed-management" 2025-02-10 08:32:32.431335 | orchestrator | 08:32:32.431 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:32.431374 | orchestrator | 08:32:32.431 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:32.431412 | orchestrator | 08:32:32.431 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.431451 | orchestrator | 08:32:32.431 STDOUT terraform:  + shared = (known after apply) 2025-02-10 08:32:32.431488 | orchestrator | 08:32:32.431 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.431532 | orchestrator | 08:32:32.431 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-02-10 08:32:32.431544 | orchestrator | 08:32:32.431 STDOUT terraform:  + segments (known after apply) 2025-02-10 08:32:32.431555 | orchestrator | 08:32:32.431 STDOUT terraform:  } 2025-02-10 08:32:32.431631 | orchestrator | 08:32:32.431 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-02-10 08:32:32.431679 | orchestrator | 08:32:32.431 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-02-10 08:32:32.431717 | orchestrator | 08:32:32.431 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.431754 | orchestrator | 08:32:32.431 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:32.431792 | orchestrator | 08:32:32.431 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:32.431830 | orchestrator | 08:32:32.431 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.431868 | orchestrator | 08:32:32.431 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:32.431905 | orchestrator | 08:32:32.431 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:32.431943 | orchestrator | 08:32:32.431 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:32.431983 | orchestrator | 08:32:32.431 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:32.432018 | orchestrator | 08:32:32.431 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.432057 | orchestrator | 08:32:32.432 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:32.432098 | orchestrator | 08:32:32.432 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 
08:32:32.432131 | orchestrator | 08:32:32.432 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:32.432171 | orchestrator | 08:32:32.432 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:32.432241 | orchestrator | 08:32:32.432 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.432279 | orchestrator | 08:32:32.432 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:32.432317 | orchestrator | 08:32:32.432 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.432329 | orchestrator | 08:32:32.432 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.432367 | orchestrator | 08:32:32.432 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:32.432396 | orchestrator | 08:32:32.432 STDOUT terraform:  } 2025-02-10 08:32:32.432407 | orchestrator | 08:32:32.432 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.432419 | orchestrator | 08:32:32.432 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:32.432430 | orchestrator | 08:32:32.432 STDOUT terraform:  } 2025-02-10 08:32:32.432459 | orchestrator | 08:32:32.432 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:32.432494 | orchestrator | 08:32:32.432 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:32.432507 | orchestrator | 08:32:32.432 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-02-10 08:32:32.432518 | orchestrator | 08:32:32.432 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.432529 | orchestrator | 08:32:32.432 STDOUT terraform:  } 2025-02-10 08:32:32.432541 | orchestrator | 08:32:32.432 STDOUT terraform:  } 2025-02-10 08:32:32.432625 | orchestrator | 08:32:32.432 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-02-10 08:32:32.432643 | orchestrator | 08:32:32.432 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:32.432691 | orchestrator | 08:32:32.432 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.432728 | orchestrator | 08:32:32.432 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:32.432763 | orchestrator | 08:32:32.432 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:32.432801 | orchestrator | 08:32:32.432 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.432840 | orchestrator | 08:32:32.432 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:32.432877 | orchestrator | 08:32:32.432 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:32.432915 | orchestrator | 08:32:32.432 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:32.432954 | orchestrator | 08:32:32.432 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:32.432993 | orchestrator | 08:32:32.432 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.433027 | orchestrator | 08:32:32.432 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:32.433064 | orchestrator | 08:32:32.433 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:32.433101 | orchestrator | 08:32:32.433 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:32.433138 | orchestrator | 08:32:32.433 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:32.433176 | orchestrator | 08:32:32.433 STDOUT terraform:  + region = 
(known after apply) 2025-02-10 08:32:32.433212 | orchestrator | 08:32:32.433 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:32.433250 | orchestrator | 08:32:32.433 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.433261 | orchestrator | 08:32:32.433 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.433299 | orchestrator | 08:32:32.433 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:32.433327 | orchestrator | 08:32:32.433 STDOUT terraform:  } 2025-02-10 08:32:32.433337 | orchestrator | 08:32:32.433 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.433351 | orchestrator | 08:32:32.433 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:32.433361 | orchestrator | 08:32:32.433 STDOUT terraform:  } 2025-02-10 08:32:32.433388 | orchestrator | 08:32:32.433 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.433420 | orchestrator | 08:32:32.433 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:32.433450 | orchestrator | 08:32:32.433 STDOUT terraform:  } 2025-02-10 08:32:32.433461 | orchestrator | 08:32:32.433 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.433472 | orchestrator | 08:32:32.433 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:32.433482 | orchestrator | 08:32:32.433 STDOUT terraform:  } 2025-02-10 08:32:32.433515 | orchestrator | 08:32:32.433 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:32.433551 | orchestrator | 08:32:32.433 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:32.433563 | orchestrator | 08:32:32.433 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-02-10 08:32:32.433572 | orchestrator | 08:32:32.433 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.433595 | orchestrator | 08:32:32.433 STDOUT terraform:  } 2025-02-10 08:32:32.433605 | orchestrator | 08:32:32.433 STDOUT terraform:  } 2025-02-10 08:32:32.433657 | orchestrator | 08:32:32.433 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-02-10 08:32:32.433702 | orchestrator | 08:32:32.433 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:32.433741 | orchestrator | 08:32:32.433 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.433781 | orchestrator | 08:32:32.433 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:32.433815 | orchestrator | 08:32:32.433 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:32.433856 | orchestrator | 08:32:32.433 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.433890 | orchestrator | 08:32:32.433 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:32.433926 | orchestrator | 08:32:32.433 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:32.433964 | orchestrator | 08:32:32.433 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:32.434001 | orchestrator | 08:32:32.433 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:32.434068 | orchestrator | 08:32:32.433 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.434105 | orchestrator | 08:32:32.434 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:32.434144 | orchestrator | 08:32:32.434 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:32.434180 | orchestrator | 
08:32:32.434 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:32.434217 | orchestrator | 08:32:32.434 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:32.434256 | orchestrator | 08:32:32.434 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.434293 | orchestrator | 08:32:32.434 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:32.434332 | orchestrator | 08:32:32.434 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.434342 | orchestrator | 08:32:32.434 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.434379 | orchestrator | 08:32:32.434 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:32.434408 | orchestrator | 08:32:32.434 STDOUT terraform:  } 2025-02-10 08:32:32.434418 | orchestrator | 08:32:32.434 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.434428 | orchestrator | 08:32:32.434 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:32.434438 | orchestrator | 08:32:32.434 STDOUT terraform:  } 2025-02-10 08:32:32.434467 | orchestrator | 08:32:32.434 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.434499 | orchestrator | 08:32:32.434 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:32.434531 | orchestrator | 08:32:32.434 STDOUT terraform:  } 2025-02-10 08:32:32.434542 | orchestrator | 08:32:32.434 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.434552 | orchestrator | 08:32:32.434 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:32.434562 | orchestrator | 08:32:32.434 STDOUT terraform:  } 2025-02-10 08:32:32.434619 | orchestrator | 08:32:32.434 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:32.434711 | orchestrator | 08:32:32.434 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:32.434755 | orchestrator | 08:32:32.434 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-02-10 08:32:32.434803 | orchestrator | 08:32:32.434 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.434814 | orchestrator | 08:32:32.434 STDOUT terraform:  } 2025-02-10 08:32:32.434844 | orchestrator | 08:32:32.434 STDOUT terraform:  } 2025-02-10 08:32:32.434897 | orchestrator | 08:32:32.434 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-02-10 08:32:32.434944 | orchestrator | 08:32:32.434 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:32.434982 | orchestrator | 08:32:32.434 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.435022 | orchestrator | 08:32:32.434 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:32.435058 | orchestrator | 08:32:32.435 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:32.435096 | orchestrator | 08:32:32.435 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.435134 | orchestrator | 08:32:32.435 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:32.435173 | orchestrator | 08:32:32.435 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:32.435211 | orchestrator | 08:32:32.435 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:32.435249 | orchestrator | 08:32:32.435 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:32.435287 | orchestrator | 08:32:32.435 STDOUT terraform:  + id = (known after 
apply) 2025-02-10 08:32:32.435325 | orchestrator | 08:32:32.435 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:32.435364 | orchestrator | 08:32:32.435 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:32.435402 | orchestrator | 08:32:32.435 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:32.435444 | orchestrator | 08:32:32.435 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:32.435483 | orchestrator | 08:32:32.435 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.435515 | orchestrator | 08:32:32.435 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:32.435554 | orchestrator | 08:32:32.435 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.435564 | orchestrator | 08:32:32.435 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.435636 | orchestrator | 08:32:32.435 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:32.435646 | orchestrator | 08:32:32.435 STDOUT terraform:  } 2025-02-10 08:32:32.435670 | orchestrator | 08:32:32.435 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.435703 | orchestrator | 08:32:32.435 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:32.435713 | orchestrator | 08:32:32.435 STDOUT terraform:  } 2025-02-10 08:32:32.435722 | orchestrator | 08:32:32.435 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.435762 | orchestrator | 08:32:32.435 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:32.435791 | orchestrator | 08:32:32.435 STDOUT terraform:  } 2025-02-10 08:32:32.435800 | orchestrator | 08:32:32.435 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.435809 | orchestrator | 08:32:32.435 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:32.435818 | orchestrator | 08:32:32.435 STDOUT terraform:  } 2025-02-10 08:32:32.435854 | orchestrator | 08:32:32.435 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:32.435863 | orchestrator | 08:32:32.435 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:32.435895 | orchestrator | 08:32:32.435 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-02-10 08:32:32.435926 | orchestrator | 08:32:32.435 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.435936 | orchestrator | 08:32:32.435 STDOUT terraform:  } 2025-02-10 08:32:32.435945 | orchestrator | 08:32:32.435 STDOUT terraform:  } 2025-02-10 08:32:32.435995 | orchestrator | 08:32:32.435 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-02-10 08:32:32.436042 | orchestrator | 08:32:32.435 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:32.436089 | orchestrator | 08:32:32.436 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.436127 | orchestrator | 08:32:32.436 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:32.436163 | orchestrator | 08:32:32.436 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:32.436213 | orchestrator | 08:32:32.436 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.436265 | orchestrator | 08:32:32.436 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:32.436303 | orchestrator | 08:32:32.436 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:32.436340 | orchestrator | 
08:32:32.436 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:32.436378 | orchestrator | 08:32:32.436 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:32.436418 | orchestrator | 08:32:32.436 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.436455 | orchestrator | 08:32:32.436 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:32.436493 | orchestrator | 08:32:32.436 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:32.436527 | orchestrator | 08:32:32.436 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:32.436564 | orchestrator | 08:32:32.436 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:32.436612 | orchestrator | 08:32:32.436 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.436652 | orchestrator | 08:32:32.436 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:32.436694 | orchestrator | 08:32:32.436 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.436704 | orchestrator | 08:32:32.436 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.436740 | orchestrator | 08:32:32.436 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:32.436771 | orchestrator | 08:32:32.436 STDOUT terraform:  } 2025-02-10 08:32:32.436780 | orchestrator | 08:32:32.436 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.436789 | orchestrator | 08:32:32.436 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:32.436812 | orchestrator | 08:32:32.436 STDOUT terraform:  } 2025-02-10 08:32:32.436827 | orchestrator | 08:32:32.436 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.436856 | orchestrator | 08:32:32.436 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:32.436866 | orchestrator | 08:32:32.436 STDOUT terraform:  } 2025-02-10 08:32:32.436891 | orchestrator | 08:32:32.436 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.436922 | orchestrator | 08:32:32.436 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:32.436955 | orchestrator | 08:32:32.436 STDOUT terraform:  } 2025-02-10 08:32:32.436965 | orchestrator | 08:32:32.436 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:32.436990 | orchestrator | 08:32:32.436 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:32.437000 | orchestrator | 08:32:32.436 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-02-10 08:32:32.437010 | orchestrator | 08:32:32.436 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.437036 | orchestrator | 08:32:32.437 STDOUT terraform:  } 2025-02-10 08:32:32.437089 | orchestrator | 08:32:32.437 STDOUT terraform:  } 2025-02-10 08:32:32.437099 | orchestrator | 08:32:32.437 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-02-10 08:32:32.437137 | orchestrator | 08:32:32.437 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:32.437174 | orchestrator | 08:32:32.437 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.437213 | orchestrator | 08:32:32.437 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:32.437253 | orchestrator | 08:32:32.437 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:32.437288 | orchestrator | 08:32:32.437 STDOUT terraform:  + all_tags = (known after 
apply) 2025-02-10 08:32:32.437325 | orchestrator | 08:32:32.437 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:32.437363 | orchestrator | 08:32:32.437 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:32.437400 | orchestrator | 08:32:32.437 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:32.437443 | orchestrator | 08:32:32.437 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:32.437476 | orchestrator | 08:32:32.437 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.437514 | orchestrator | 08:32:32.437 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:32.437552 | orchestrator | 08:32:32.437 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:32.437600 | orchestrator | 08:32:32.437 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:32.437642 | orchestrator | 08:32:32.437 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:32.437682 | orchestrator | 08:32:32.437 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.437720 | orchestrator | 08:32:32.437 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:32.437756 | orchestrator | 08:32:32.437 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.437765 | orchestrator | 08:32:32.437 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.437807 | orchestrator | 08:32:32.437 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:32.437821 | orchestrator | 08:32:32.437 STDOUT terraform:  } 2025-02-10 08:32:32.437830 | orchestrator | 08:32:32.437 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.437868 | orchestrator | 08:32:32.437 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:32.437897 | orchestrator | 08:32:32.437 STDOUT terraform:  } 2025-02-10 08:32:32.437906 | orchestrator | 08:32:32.437 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.437922 | orchestrator | 08:32:32.437 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:32.437931 | orchestrator | 08:32:32.437 STDOUT terraform:  } 2025-02-10 08:32:32.437940 | orchestrator | 08:32:32.437 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.437979 | orchestrator | 08:32:32.437 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:32.438026 | orchestrator | 08:32:32.437 STDOUT terraform:  } 2025-02-10 08:32:32.438038 | orchestrator | 08:32:32.437 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:32.438070 | orchestrator | 08:32:32.438 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:32.438078 | orchestrator | 08:32:32.438 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-02-10 08:32:32.438102 | orchestrator | 08:32:32.438 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.438112 | orchestrator | 08:32:32.438 STDOUT terraform:  } 2025-02-10 08:32:32.438121 | orchestrator | 08:32:32.438 STDOUT terraform:  } 2025-02-10 08:32:32.438176 | orchestrator | 08:32:32.438 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-02-10 08:32:32.438228 | orchestrator | 08:32:32.438 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:32.438266 | orchestrator | 08:32:32.438 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.438304 | orchestrator | 08:32:32.438 STDOUT 
terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:32.438345 | orchestrator | 08:32:32.438 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:32.438383 | orchestrator | 08:32:32.438 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.438422 | orchestrator | 08:32:32.438 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:32.438461 | orchestrator | 08:32:32.438 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:32.438499 | orchestrator | 08:32:32.438 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:32.438537 | orchestrator | 08:32:32.438 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:32.438577 | orchestrator | 08:32:32.438 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.438655 | orchestrator | 08:32:32.438 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:32.438699 | orchestrator | 08:32:32.438 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:32.438735 | orchestrator | 08:32:32.438 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:32.438775 | orchestrator | 08:32:32.438 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:32.438814 | orchestrator | 08:32:32.438 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.438851 | orchestrator | 08:32:32.438 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:32.438888 | orchestrator | 08:32:32.438 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.438898 | orchestrator | 08:32:32.438 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.438937 | orchestrator | 08:32:32.438 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:32.438947 | orchestrator | 08:32:32.438 STDOUT terraform:  } 2025-02-10 08:32:32.438956 | orchestrator | 08:32:32.438 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.438999 | orchestrator | 08:32:32.438 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:32.439014 | orchestrator | 08:32:32.438 STDOUT terraform:  } 2025-02-10 08:32:32.439022 | orchestrator | 08:32:32.438 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.439055 | orchestrator | 08:32:32.439 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:32.439084 | orchestrator | 08:32:32.439 STDOUT terraform:  } 2025-02-10 08:32:32.439093 | orchestrator | 08:32:32.439 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:32.439118 | orchestrator | 08:32:32.439 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:32.439127 | orchestrator | 08:32:32.439 STDOUT terraform:  } 2025-02-10 08:32:32.439153 | orchestrator | 08:32:32.439 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:32.439162 | orchestrator | 08:32:32.439 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:32.439194 | orchestrator | 08:32:32.439 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-02-10 08:32:32.439229 | orchestrator | 08:32:32.439 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.439236 | orchestrator | 08:32:32.439 STDOUT terraform:  } 2025-02-10 08:32:32.439244 | orchestrator | 08:32:32.439 STDOUT terraform:  } 2025-02-10 08:32:32.439292 | orchestrator | 08:32:32.439 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-02-10 08:32:32.439346 | orchestrator | 
08:32:32.439 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-02-10 08:32:32.439355 | orchestrator | 08:32:32.439 STDOUT terraform:  + force_destroy = false 2025-02-10 08:32:32.439392 | orchestrator | 08:32:32.439 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.439423 | orchestrator | 08:32:32.439 STDOUT terraform:  + port_id = (known after apply) 2025-02-10 08:32:32.439453 | orchestrator | 08:32:32.439 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.439483 | orchestrator | 08:32:32.439 STDOUT terraform:  + router_id = (known after apply) 2025-02-10 08:32:32.439514 | orchestrator | 08:32:32.439 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:32.439558 | orchestrator | 08:32:32.439 STDOUT terraform:  } 2025-02-10 08:32:32.439568 | orchestrator | 08:32:32.439 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-02-10 08:32:32.439623 | orchestrator | 08:32:32.439 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-02-10 08:32:32.439667 | orchestrator | 08:32:32.439 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:32.439706 | orchestrator | 08:32:32.439 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.439733 | orchestrator | 08:32:32.439 STDOUT terraform:  + availability_zone_hints = [ 2025-02-10 08:32:32.439742 | orchestrator | 08:32:32.439 STDOUT terraform:  + "nova", 2025-02-10 08:32:32.439803 | orchestrator | 08:32:32.439 STDOUT terraform:  ] 2025-02-10 08:32:32.439842 | orchestrator | 08:32:32.439 STDOUT terraform:  + distributed = (known after apply) 2025-02-10 08:32:32.439895 | orchestrator | 08:32:32.439 STDOUT terraform:  + enable_snat = (known after apply) 2025-02-10 08:32:32.439945 | orchestrator | 08:32:32.439 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-02-10 08:32:32.439996 | orchestrator | 08:32:32.439 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.440030 | orchestrator | 08:32:32.439 STDOUT terraform:  + name = "testbed" 2025-02-10 08:32:32.440069 | orchestrator | 08:32:32.440 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.440109 | orchestrator | 08:32:32.440 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.440140 | orchestrator | 08:32:32.440 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-02-10 08:32:32.440204 | orchestrator | 08:32:32.440 STDOUT terraform:  } 2025-02-10 08:32:32.440213 | orchestrator | 08:32:32.440 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-02-10 08:32:32.440262 | orchestrator | 08:32:32.440 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-02-10 08:32:32.440272 | orchestrator | 08:32:32.440 STDOUT terraform:  + description = "ssh" 2025-02-10 08:32:32.440305 | orchestrator | 08:32:32.440 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.440314 | orchestrator | 08:32:32.440 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.440357 | orchestrator | 08:32:32.440 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.440367 | orchestrator | 08:32:32.440 STDOUT terraform:  + port_range_max = 22 2025-02-10 08:32:32.440402 | orchestrator | 08:32:32.440 STDOUT terraform:  + port_range_min = 22 2025-02-10 08:32:32.440435 | orchestrator | 08:32:32.440 STDOUT terraform:  + 
protocol = "tcp" 2025-02-10 08:32:32.440468 | orchestrator | 08:32:32.440 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.440498 | orchestrator | 08:32:32.440 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.440525 | orchestrator | 08:32:32.440 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:32.440557 | orchestrator | 08:32:32.440 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.440600 | orchestrator | 08:32:32.440 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.440660 | orchestrator | 08:32:32.440 STDOUT terraform:  } 2025-02-10 08:32:32.440669 | orchestrator | 08:32:32.440 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-02-10 08:32:32.440715 | orchestrator | 08:32:32.440 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-02-10 08:32:32.440743 | orchestrator | 08:32:32.440 STDOUT terraform:  + description = "wireguard" 2025-02-10 08:32:32.440770 | orchestrator | 08:32:32.440 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.440779 | orchestrator | 08:32:32.440 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.440821 | orchestrator | 08:32:32.440 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.440831 | orchestrator | 08:32:32.440 STDOUT terraform:  + port_range_max = 51820 2025-02-10 08:32:32.440861 | orchestrator | 08:32:32.440 STDOUT terraform:  + port_range_min = 51820 2025-02-10 08:32:32.440871 | orchestrator | 08:32:32.440 STDOUT terraform:  + protocol = "udp" 2025-02-10 08:32:32.440910 | orchestrator | 08:32:32.440 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.440941 | orchestrator | 08:32:32.440 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.440966 | orchestrator | 08:32:32.440 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:32.440997 | orchestrator | 08:32:32.440 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.441030 | orchestrator | 08:32:32.440 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.441091 | orchestrator | 08:32:32.441 STDOUT terraform:  } 2025-02-10 08:32:32.441099 | orchestrator | 08:32:32.441 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-02-10 08:32:32.441149 | orchestrator | 08:32:32.441 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-02-10 08:32:32.441160 | orchestrator | 08:32:32.441 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.441191 | orchestrator | 08:32:32.441 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.441223 | orchestrator | 08:32:32.441 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.441232 | orchestrator | 08:32:32.441 STDOUT terraform:  + protocol = "tcp" 2025-02-10 08:32:32.441273 | orchestrator | 08:32:32.441 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.441304 | orchestrator | 08:32:32.441 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.441335 | orchestrator | 08:32:32.441 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-02-10 08:32:32.441367 | orchestrator | 08:32:32.441 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.441400 | orchestrator | 
08:32:32.441 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.441463 | orchestrator | 08:32:32.441 STDOUT terraform:  } 2025-02-10 08:32:32.441472 | orchestrator | 08:32:32.441 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-02-10 08:32:32.441516 | orchestrator | 08:32:32.441 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-02-10 08:32:32.441542 | orchestrator | 08:32:32.441 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.441566 | orchestrator | 08:32:32.441 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.441795 | orchestrator | 08:32:32.441 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.441899 | orchestrator | 08:32:32.441 STDOUT terraform:  + protocol = "udp" 2025-02-10 08:32:32.441945 | orchestrator | 08:32:32.441 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.441971 | orchestrator | 08:32:32.441 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.442005 | orchestrator | 08:32:32.441 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-02-10 08:32:32.442102 | orchestrator | 08:32:32.441 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.442155 | orchestrator | 08:32:32.441 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.442190 | orchestrator | 08:32:32.441 STDOUT terraform:  } 2025-02-10 08:32:32.442216 | orchestrator | 08:32:32.441 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-02-10 08:32:32.442243 | orchestrator | 08:32:32.441 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-02-10 08:32:32.442277 | orchestrator | 08:32:32.441 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.442304 | orchestrator | 08:32:32.441 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.442329 | orchestrator | 08:32:32.441 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.442354 | orchestrator | 08:32:32.441 STDOUT terraform:  + protocol = "icmp" 2025-02-10 08:32:32.442378 | orchestrator | 08:32:32.441 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.442401 | orchestrator | 08:32:32.441 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.442424 | orchestrator | 08:32:32.441 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:32.442448 | orchestrator | 08:32:32.442 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.442474 | orchestrator | 08:32:32.442 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.442501 | orchestrator | 08:32:32.442 STDOUT terraform:  } 2025-02-10 08:32:32.442520 | orchestrator | 08:32:32.442 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-02-10 08:32:32.442540 | orchestrator | 08:32:32.442 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-02-10 08:32:32.442555 | orchestrator | 08:32:32.442 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.442570 | orchestrator | 08:32:32.442 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.442633 | orchestrator | 08:32:32.442 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.442659 | orchestrator | 08:32:32.442 STDOUT 
terraform:  + protocol = "tcp" 2025-02-10 08:32:32.442686 | orchestrator | 08:32:32.442 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.442709 | orchestrator | 08:32:32.442 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.442733 | orchestrator | 08:32:32.442 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:32.442757 | orchestrator | 08:32:32.442 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.442780 | orchestrator | 08:32:32.442 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.442802 | orchestrator | 08:32:32.442 STDOUT terraform:  } 2025-02-10 08:32:32.442825 | orchestrator | 08:32:32.442 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-02-10 08:32:32.442849 | orchestrator | 08:32:32.442 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-02-10 08:32:32.442873 | orchestrator | 08:32:32.442 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.442916 | orchestrator | 08:32:32.442 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.442940 | orchestrator | 08:32:32.442 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.442965 | orchestrator | 08:32:32.442 STDOUT terraform:  + protocol = "udp" 2025-02-10 08:32:32.442991 | orchestrator | 08:32:32.442 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.443014 | orchestrator | 08:32:32.442 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.443038 | orchestrator | 08:32:32.442 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:32.443093 | orchestrator | 08:32:32.442 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.443121 | orchestrator | 08:32:32.442 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.443145 | orchestrator | 08:32:32.442 STDOUT terraform:  } 2025-02-10 08:32:32.443162 | orchestrator | 08:32:32.442 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-02-10 08:32:32.443177 | orchestrator | 08:32:32.442 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-02-10 08:32:32.443192 | orchestrator | 08:32:32.442 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.443206 | orchestrator | 08:32:32.442 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.443226 | orchestrator | 08:32:32.442 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.443242 | orchestrator | 08:32:32.442 STDOUT terraform:  + protocol = "icmp" 2025-02-10 08:32:32.443257 | orchestrator | 08:32:32.442 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.443271 | orchestrator | 08:32:32.442 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.443286 | orchestrator | 08:32:32.442 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:32.443300 | orchestrator | 08:32:32.443 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.443315 | orchestrator | 08:32:32.443 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.443330 | orchestrator | 08:32:32.443 STDOUT terraform:  } 2025-02-10 08:32:32.443344 | orchestrator | 08:32:32.443 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-02-10 08:32:32.443359 | 
orchestrator | 08:32:32.443 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-02-10 08:32:32.443375 | orchestrator | 08:32:32.443 STDOUT terraform:  + description = "vrrp" 2025-02-10 08:32:32.443389 | orchestrator | 08:32:32.443 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:32.443404 | orchestrator | 08:32:32.443 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:32.443435 | orchestrator | 08:32:32.443 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.443452 | orchestrator | 08:32:32.443 STDOUT terraform:  + protocol = "112" 2025-02-10 08:32:32.443466 | orchestrator | 08:32:32.443 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.443501 | orchestrator | 08:32:32.443 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:32.443517 | orchestrator | 08:32:32.443 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:32.443531 | orchestrator | 08:32:32.443 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:32.443546 | orchestrator | 08:32:32.443 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.443562 | orchestrator | 08:32:32.443 STDOUT terraform:  } 2025-02-10 08:32:32.443603 | orchestrator | 08:32:32.443 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-02-10 08:32:32.443629 | orchestrator | 08:32:32.443 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-02-10 08:32:32.443654 | orchestrator | 08:32:32.443 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.443670 | orchestrator | 08:32:32.443 STDOUT terraform:  + description = "management security group" 2025-02-10 08:32:32.443685 | orchestrator | 08:32:32.443 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.443704 | orchestrator | 08:32:32.443 STDOUT terraform:  + name = "testbed-management" 2025-02-10 08:32:32.443761 | orchestrator | 08:32:32.443 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.443778 | orchestrator | 08:32:32.443 STDOUT terraform:  + stateful = (known after apply) 2025-02-10 08:32:32.443792 | orchestrator | 08:32:32.443 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.443807 | orchestrator | 08:32:32.443 STDOUT terraform:  } 2025-02-10 08:32:32.443842 | orchestrator | 08:32:32.443 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-02-10 08:32:32.443861 | orchestrator | 08:32:32.443 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-02-10 08:32:32.443877 | orchestrator | 08:32:32.443 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.443893 | orchestrator | 08:32:32.443 STDOUT terraform:  + description = "node security group" 2025-02-10 08:32:32.443912 | orchestrator | 08:32:32.443 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.443954 | orchestrator | 08:32:32.443 STDOUT terraform:  + name = "testbed-node" 2025-02-10 08:32:32.443974 | orchestrator | 08:32:32.443 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.443994 | orchestrator | 08:32:32.443 STDOUT terraform:  + stateful = (known after apply) 2025-02-10 08:32:32.444033 | orchestrator | 08:32:32.443 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.444049 | orchestrator | 08:32:32.443 STDOUT terraform:  } 2025-02-10 
08:32:32.444070 | orchestrator | 08:32:32.443 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-02-10 08:32:32.444110 | orchestrator | 08:32:32.444 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-02-10 08:32:32.444132 | orchestrator | 08:32:32.444 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:32.444148 | orchestrator | 08:32:32.444 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-02-10 08:32:32.444167 | orchestrator | 08:32:32.444 STDOUT terraform:  + dns_nameservers = [ 2025-02-10 08:32:32.444193 | orchestrator | 08:32:32.444 STDOUT terraform:  + "8.8.8.8", 2025-02-10 08:32:32.444208 | orchestrator | 08:32:32.444 STDOUT terraform:  + "9.9.9.9", 2025-02-10 08:32:32.444227 | orchestrator | 08:32:32.444 STDOUT terraform:  ] 2025-02-10 08:32:32.444243 | orchestrator | 08:32:32.444 STDOUT terraform:  + enable_dhcp = true 2025-02-10 08:32:32.444257 | orchestrator | 08:32:32.444 STDOUT terraform:  + gateway_ip = (known after apply) 2025-02-10 08:32:32.444277 | orchestrator | 08:32:32.444 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.444292 | orchestrator | 08:32:32.444 STDOUT terraform:  + ip_version = 4 2025-02-10 08:32:32.444310 | orchestrator | 08:32:32.444 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-02-10 08:32:32.444330 | orchestrator | 08:32:32.444 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-02-10 08:32:32.444350 | orchestrator | 08:32:32.444 STDOUT terraform:  + name = "subnet-testbed-management" 2025-02-10 08:32:32.444399 | orchestrator | 08:32:32.444 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:32.444443 | orchestrator | 08:32:32.444 STDOUT terraform:  + no_gateway = false 2025-02-10 08:32:32.444463 | orchestrator | 08:32:32.444 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:32.444505 | orchestrator | 08:32:32.444 STDOUT terraform:  + service_types = (known after apply) 2025-02-10 08:32:32.444526 | orchestrator | 08:32:32.444 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:32.444566 | orchestrator | 08:32:32.444 STDOUT terraform:  + allocation_pool { 2025-02-10 08:32:32.444618 | orchestrator | 08:32:32.444 STDOUT terraform:  + end = "192.168.31.250" 2025-02-10 08:32:32.444654 | orchestrator | 08:32:32.444 STDOUT terraform:  + start = "192.168.31.200" 2025-02-10 08:32:32.444671 | orchestrator | 08:32:32.444 STDOUT terraform:  } 2025-02-10 08:32:32.444686 | orchestrator | 08:32:32.444 STDOUT terraform:  } 2025-02-10 08:32:32.444701 | orchestrator | 08:32:32.444 STDOUT terraform:  # terraform_data.image will be created 2025-02-10 08:32:32.444716 | orchestrator | 08:32:32.444 STDOUT terraform:  + resource "terraform_data" "image" { 2025-02-10 08:32:32.444730 | orchestrator | 08:32:32.444 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:32.444749 | orchestrator | 08:32:32.444 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-02-10 08:32:32.444764 | orchestrator | 08:32:32.444 STDOUT terraform:  + output = (known after apply) 2025-02-10 08:32:32.444779 | orchestrator | 08:32:32.444 STDOUT terraform:  } 2025-02-10 08:32:32.444794 | orchestrator | 08:32:32.444 STDOUT terraform:  # terraform_data.image_node will be created 2025-02-10 08:32:32.444809 | orchestrator | 08:32:32.444 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-02-10 08:32:32.444828 | orchestrator | 08:32:32.444 STDOUT terraform:  + id = (known after apply) 2025-02-10 
08:32:32.444843 | orchestrator | 08:32:32.444 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-02-10 08:32:32.444858 | orchestrator | 08:32:32.444 STDOUT terraform:  + output = (known after apply) 2025-02-10 08:32:32.444873 | orchestrator | 08:32:32.444 STDOUT terraform:  } 2025-02-10 08:32:32.444899 | orchestrator | 08:32:32.444 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-02-10 08:32:32.444918 | orchestrator | 08:32:32.444 STDOUT terraform: Changes to Outputs: 2025-02-10 08:32:32.672260 | orchestrator | 08:32:32.444 STDOUT terraform:  + manager_address = (sensitive value) 2025-02-10 08:32:32.672351 | orchestrator | 08:32:32.444 STDOUT terraform:  + private_key = (sensitive value) 2025-02-10 08:32:32.672371 | orchestrator | 08:32:32.671 STDOUT terraform: terraform_data.image_node: Creating... 2025-02-10 08:32:32.673543 | orchestrator | 08:32:32.672 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=035fc614-3db4-d061-b783-324aa4264eb4] 2025-02-10 08:32:32.673675 | orchestrator | 08:32:32.672 STDOUT terraform: terraform_data.image: Creating... 2025-02-10 08:32:32.673707 | orchestrator | 08:32:32.673 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=062ef291-1ecd-8dc8-bb03-589f5b3154b2] 2025-02-10 08:32:32.687930 | orchestrator | 08:32:32.687 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-02-10 08:32:32.690076 | orchestrator | 08:32:32.689 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-02-10 08:32:32.708064 | orchestrator | 08:32:32.707 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-02-10 08:32:32.708948 | orchestrator | 08:32:32.708 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-02-10 08:32:32.709317 | orchestrator | 08:32:32.708 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-02-10 08:32:32.709340 | orchestrator | 08:32:32.709 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-02-10 08:32:32.709515 | orchestrator | 08:32:32.709 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-02-10 08:32:32.710640 | orchestrator | 08:32:32.710 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-02-10 08:32:32.719878 | orchestrator | 08:32:32.710 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-02-10 08:32:32.719928 | orchestrator | 08:32:32.719 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-02-10 08:32:33.338334 | orchestrator | 08:32:33.337 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-02-10 08:32:33.345150 | orchestrator | 08:32:33.344 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-02-10 08:32:33.346243 | orchestrator | 08:32:33.345 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-02-10 08:32:33.352379 | orchestrator | 08:32:33.352 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-02-10 08:32:33.639197 | orchestrator | 08:32:33.638 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-02-10 08:32:33.648464 | orchestrator | 08:32:33.648 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
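Note: the node_port_management plan blocks above map to a single counted openstack_networking_port_v2 resource. The following is a minimal sketch reconstructed from the plan output only; the count of 6 and the network/subnet references are assumptions, while the fixed IPs and allowed address pairs are copied from the plan.

resource "openstack_networking_port_v2" "node_port_management" {
  # Sketch reconstructed from the plan output; count and references are assumed.
  count      = 6                                                  # indices [0]..[5] appear in the plan
  network_id = openstack_networking_network_v2.net_management.id  # assumed reference

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed reference
    ip_address = "192.168.16.${10 + count.index}"                     # 192.168.16.10 .. 192.168.16.15 per the plan
  }

  # Allowed address pairs exactly as shown in the plan
  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.254/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
  allowed_address_pairs { ip_address = "192.168.16.9/20" }
}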
2025-02-10 08:32:38.806463 | orchestrator | 08:32:38.805 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=b74ee23f-a170-4fcb-b5ff-9a97214c79a1] 2025-02-10 08:32:38.812839 | orchestrator | 08:32:38.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-02-10 08:32:42.709233 | orchestrator | 08:32:42.708 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-02-10 08:32:42.711672 | orchestrator | 08:32:42.711 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-02-10 08:32:42.711798 | orchestrator | 08:32:42.711 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-02-10 08:32:42.711837 | orchestrator | 08:32:42.711 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-02-10 08:32:42.712819 | orchestrator | 08:32:42.711 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-02-10 08:32:42.712878 | orchestrator | 08:32:42.712 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-02-10 08:32:43.348241 | orchestrator | 08:32:43.347 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-02-10 08:32:43.353525 | orchestrator | 08:32:43.353 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-02-10 08:32:43.509736 | orchestrator | 08:32:43.509 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=c3934c93-3cd2-4fec-bdf3-cbeea6813a64] 2025-02-10 08:32:43.516198 | orchestrator | 08:32:43.515 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-02-10 08:32:43.521041 | orchestrator | 08:32:43.520 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=65f68ad4-1f17-45a3-95a2-0b9d82b524cb] 2025-02-10 08:32:43.526354 | orchestrator | 08:32:43.526 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-02-10 08:32:43.536314 | orchestrator | 08:32:43.536 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=4ec8f61f-9e5b-49cd-9e82-40bf07cffc70] 2025-02-10 08:32:43.546916 | orchestrator | 08:32:43.546 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-02-10 08:32:43.557900 | orchestrator | 08:32:43.557 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=23794fae-2c08-458a-becf-a15050b8218b] 2025-02-10 08:32:43.563984 | orchestrator | 08:32:43.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-02-10 08:32:43.594742 | orchestrator | 08:32:43.594 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=be832b54-23bf-4f17-8551-69f0e04b6625] 2025-02-10 08:32:43.600755 | orchestrator | 08:32:43.600 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
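Note: the node_volume, node_base_volume and manager_base_volume resources being created here are counted openstack_blockstorage_volume_v3 resources. A rough sketch for orientation only; the log shows indices [0]..[17] for node_volume, but the name and size below are illustrative assumptions since they do not appear in this part of the log.

resource "openstack_blockstorage_volume_v3" "node_volume" {
  # Sketch only; count matches the indices [0]..[17] seen in the log,
  # name and size are illustrative assumptions.
  count = 18
  name  = "testbed-node-volume-${count.index}"
  size  = 20
}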
2025-02-10 08:32:43.621610 | orchestrator | 08:32:43.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 11s [id=494ee814-0dd9-4f0f-8082-b266e2c53997] 2025-02-10 08:32:43.626221 | orchestrator | 08:32:43.625 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=91337675-2774-4bb7-b881-e3b3f642e46a] 2025-02-10 08:32:43.631976 | orchestrator | 08:32:43.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-02-10 08:32:43.632145 | orchestrator | 08:32:43.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=96415da1-6a76-4477-bfa7-f065f33f8e6a] 2025-02-10 08:32:43.632229 | orchestrator | 08:32:43.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-02-10 08:32:43.639564 | orchestrator | 08:32:43.639 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-02-10 08:32:43.649185 | orchestrator | 08:32:43.649 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-02-10 08:32:43.841575 | orchestrator | 08:32:43.841 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=5ff65196-c1cf-41f3-a955-25be0154b459] 2025-02-10 08:32:43.853946 | orchestrator | 08:32:43.853 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-02-10 08:32:48.813576 | orchestrator | 08:32:48.813 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-02-10 08:32:48.973816 | orchestrator | 08:32:48.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=103f3392-831d-4ee6-b0f0-d6be015816d3] 2025-02-10 08:32:48.985088 | orchestrator | 08:32:48.984 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-02-10 08:32:53.517438 | orchestrator | 08:32:53.517 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-02-10 08:32:53.527417 | orchestrator | 08:32:53.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-02-10 08:32:53.549828 | orchestrator | 08:32:53.549 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-02-10 08:32:53.564359 | orchestrator | 08:32:53.564 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-02-10 08:32:53.602089 | orchestrator | 08:32:53.601 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-02-10 08:32:53.631111 | orchestrator | 08:32:53.630 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-02-10 08:32:53.633241 | orchestrator | 08:32:53.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-02-10 08:32:53.641626 | orchestrator | 08:32:53.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... 
[10s elapsed] 2025-02-10 08:32:53.702957 | orchestrator | 08:32:53.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=492baa9f-f661-44dd-a3d2-70d79942748c] 2025-02-10 08:32:53.728948 | orchestrator | 08:32:53.728 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-02-10 08:32:53.734912 | orchestrator | 08:32:53.734 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=809e68db-7594-4e4e-90c0-4a7ae6eb5d4d] 2025-02-10 08:32:53.751312 | orchestrator | 08:32:53.751 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=a31d8f91-c02a-4f65-9bd6-abd5e53b34f2] 2025-02-10 08:32:53.758431 | orchestrator | 08:32:53.758 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-02-10 08:32:53.758705 | orchestrator | 08:32:53.758 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-02-10 08:32:53.771289 | orchestrator | 08:32:53.770 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=4c6139c98c8ed8bbd73f2712abe901baf68d30e1] 2025-02-10 08:32:53.780453 | orchestrator | 08:32:53.780 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-02-10 08:32:53.810935 | orchestrator | 08:32:53.810 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=086c202d-0ccf-4be9-aa6b-e4e971478b82] 2025-02-10 08:32:53.824506 | orchestrator | 08:32:53.824 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-02-10 08:32:53.843458 | orchestrator | 08:32:53.842 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=d66bf247-6327-430d-be20-e0df09e5016f] 2025-02-10 08:32:53.848873 | orchestrator | 08:32:53.848 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-02-10 08:32:53.854958 | orchestrator | 08:32:53.854 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-02-10 08:32:53.861047 | orchestrator | 08:32:53.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=094c1351-6c25-40a9-b10a-7f3d6a96f205] 2025-02-10 08:32:53.872734 | orchestrator | 08:32:53.872 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=c1ae1e45-2170-46e6-8462-912ee8672daa] 2025-02-10 08:32:53.873943 | orchestrator | 08:32:53.873 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-02-10 08:32:53.890646 | orchestrator | 08:32:53.890 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=734cf6c7-c554-44f9-b9cd-702600de9593] 2025-02-10 08:32:53.893542 | orchestrator | 08:32:53.893 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-02-10 08:32:53.895081 | orchestrator | 08:32:53.894 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=7ae97f08cdc540fe1d93d1c724939a7deb413f59] 2025-02-10 08:32:54.203196 | orchestrator | 08:32:54.202 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=cefa0579-c853-4b9b-8f2b-0cb67bd2fa53] 2025-02-10 08:32:58.986778 | orchestrator | 08:32:58.986 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... 
[10s elapsed] 2025-02-10 08:32:59.331356 | orchestrator | 08:32:59.330 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=151fea73-21a0-4011-a292-7d2582f49900] 2025-02-10 08:32:59.673547 | orchestrator | 08:32:59.673 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=2563a08b-eaf0-44fd-876c-3ac712996e96] 2025-02-10 08:32:59.685825 | orchestrator | 08:32:59.685 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-02-10 08:33:03.729702 | orchestrator | 08:33:03.729 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-02-10 08:33:03.760033 | orchestrator | 08:33:03.759 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-02-10 08:33:03.781544 | orchestrator | 08:33:03.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-02-10 08:33:03.826242 | orchestrator | 08:33:03.825 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-02-10 08:33:03.849756 | orchestrator | 08:33:03.849 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-02-10 08:33:04.110219 | orchestrator | 08:33:04.109 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7] 2025-02-10 08:33:04.155230 | orchestrator | 08:33:04.154 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=7bb1a57e-a3aa-41a1-8378-2ba5a5124dde] 2025-02-10 08:33:04.179450 | orchestrator | 08:33:04.178 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=0279abfd-66e1-4206-bc3d-37e10a9f78bb] 2025-02-10 08:33:04.213342 | orchestrator | 08:33:04.212 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=4aefdc38-3054-474e-a34a-07d97ce8643d] 2025-02-10 08:33:04.214196 | orchestrator | 08:33:04.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=4c85cd12-85f6-4a1e-a24f-730d9e6d165f] 2025-02-10 08:33:06.415285 | orchestrator | 08:33:06.414 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 6s [id=a5f63a65-0b2a-4191-a3f0-331dd4bad72d] 2025-02-10 08:33:06.421919 | orchestrator | 08:33:06.421 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-02-10 08:33:06.422276 | orchestrator | 08:33:06.422 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-02-10 08:33:06.427518 | orchestrator | 08:33:06.427 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-02-10 08:33:06.546224 | orchestrator | 08:33:06.545 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=e70746da-2f59-47bd-8d74-d928063718e5] 2025-02-10 08:33:06.561365 | orchestrator | 08:33:06.559 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-02-10 08:33:06.570101 | orchestrator | 08:33:06.559 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
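Note: the security group rules now being created correspond to the plan entries further up (ssh tcp/22 and wireguard udp/51820 from anywhere, tcp/udp from 192.168.16.0/20, icmp, and VRRP protocol 112). A condensed sketch of the management group and two representative rules, with the attachment of the VRRP rule assumed:

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"                                                      # VRRP, as in the plan
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id    # assumed attachment
}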
2025-02-10 08:33:06.570176 | orchestrator | 08:33:06.559 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-02-10 08:33:06.570193 | orchestrator | 08:33:06.568 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-02-10 08:33:06.572328 | orchestrator | 08:33:06.572 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=c7926e1f-5e8d-4604-b015-63d2a2fbd8c7] 2025-02-10 08:33:06.577286 | orchestrator | 08:33:06.577 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-02-10 08:33:06.578142 | orchestrator | 08:33:06.578 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-02-10 08:33:06.579641 | orchestrator | 08:33:06.579 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-02-10 08:33:06.580555 | orchestrator | 08:33:06.580 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-02-10 08:33:06.581776 | orchestrator | 08:33:06.581 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-02-10 08:33:06.674441 | orchestrator | 08:33:06.674 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=e2cf1d32-38e6-436e-94f0-fe18a09ab591] 2025-02-10 08:33:06.685803 | orchestrator | 08:33:06.685 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-02-10 08:33:06.792520 | orchestrator | 08:33:06.792 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=5a8c3fc7-3bd4-4a70-9ab9-2c6f2c9e603d] 2025-02-10 08:33:06.809890 | orchestrator | 08:33:06.809 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-02-10 08:33:06.821225 | orchestrator | 08:33:06.813 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=f2a3ddf7-a250-4d16-aca7-52dd7d09c2ae] 2025-02-10 08:33:06.829565 | orchestrator | 08:33:06.829 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-02-10 08:33:07.024328 | orchestrator | 08:33:07.023 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=2de0aaa4-f284-40ce-9330-7cb38ac1f9ab] 2025-02-10 08:33:07.039207 | orchestrator | 08:33:07.038 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-02-10 08:33:07.080784 | orchestrator | 08:33:07.080 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=d75ad0f7-85b0-498b-8a91-6e02d6388b78] 2025-02-10 08:33:07.097019 | orchestrator | 08:33:07.096 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-02-10 08:33:07.220146 | orchestrator | 08:33:07.219 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=b54ef1fb-e77e-498d-a685-d1ac7277777d] 2025-02-10 08:33:07.234206 | orchestrator | 08:33:07.233 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
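Note: the subnet_management resource created above matches its plan block: CIDR 192.168.16.0/20, DHCP enabled, public DNS resolvers, and an allocation pool confined to 192.168.31.200-250, presumably so the statically assigned port addresses in 192.168.16.x stay outside the DHCP range. Sketch reconstructed from the plan; only the network reference is assumed.

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}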
2025-02-10 08:33:07.545528 | orchestrator | 08:33:07.545 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=b27fc800-3cca-42cc-99c9-e925f787862a] 2025-02-10 08:33:07.561055 | orchestrator | 08:33:07.560 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-02-10 08:33:07.661722 | orchestrator | 08:33:07.661 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=76a399c1-fbb7-462f-adc9-474116ca9b17] 2025-02-10 08:33:07.883976 | orchestrator | 08:33:07.883 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=6f878071-4c66-4ca5-8b91-c1105cd9ab43] 2025-02-10 08:33:12.387858 | orchestrator | 08:33:12.387 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=26650198-2949-468b-b7ec-7e29631fdf11] 2025-02-10 08:33:12.522895 | orchestrator | 08:33:12.522 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=6a11824b-a7fb-46d8-901f-38995b84583d] 2025-02-10 08:33:12.570250 | orchestrator | 08:33:12.569 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=b122ba1f-4eee-461c-b93f-4f26810da0fe] 2025-02-10 08:33:13.113323 | orchestrator | 08:33:13.112 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=e285733c-df78-4f52-93ae-d47fffac168d] 2025-02-10 08:33:13.128710 | orchestrator | 08:33:13.128 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=e1c56074-3714-44ab-84b3-4f24069b590e] 2025-02-10 08:33:13.345683 | orchestrator | 08:33:13.345 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=052af1b3-7772-46d7-a098-54b45fd10c0f] 2025-02-10 08:33:13.437620 | orchestrator | 08:33:13.437 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=3956339e-3590-4230-989a-cc262218aa50] 2025-02-10 08:33:14.117336 | orchestrator | 08:33:14.116 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=ba0a7df4-128c-4664-95e2-19763079722b] 2025-02-10 08:33:14.149808 | orchestrator | 08:33:14.149 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-02-10 08:33:14.164417 | orchestrator | 08:33:14.164 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-02-10 08:33:14.168156 | orchestrator | 08:33:14.168 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-02-10 08:33:14.169613 | orchestrator | 08:33:14.169 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-02-10 08:33:14.179169 | orchestrator | 08:33:14.179 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-02-10 08:33:14.182447 | orchestrator | 08:33:14.182 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-02-10 08:33:14.187663 | orchestrator | 08:33:14.187 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 
2025-02-10 08:33:20.463296 | orchestrator | 08:33:20.462 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=7b893bf0-3fc9-4d14-90cc-314e3830cb7a] 2025-02-10 08:33:20.483152 | orchestrator | 08:33:20.482 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-02-10 08:33:20.484151 | orchestrator | 08:33:20.483 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-02-10 08:33:20.484195 | orchestrator | 08:33:20.484 STDOUT terraform: local_file.inventory: Creating... 2025-02-10 08:33:20.489937 | orchestrator | 08:33:20.489 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f7791594e0f809d88bce5737936e2d621a8dd7e1] 2025-02-10 08:33:20.491516 | orchestrator | 08:33:20.491 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=980ab9e04798b818a9befa08fea9becca0475620] 2025-02-10 08:33:20.968698 | orchestrator | 08:33:20.968 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=7b893bf0-3fc9-4d14-90cc-314e3830cb7a] 2025-02-10 08:33:24.168846 | orchestrator | 08:33:24.168 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-02-10 08:33:24.179238 | orchestrator | 08:33:24.178 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-02-10 08:33:24.179358 | orchestrator | 08:33:24.179 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-02-10 08:33:24.182450 | orchestrator | 08:33:24.182 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-02-10 08:33:24.186448 | orchestrator | 08:33:24.186 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-02-10 08:33:24.187666 | orchestrator | 08:33:24.187 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-02-10 08:33:34.169314 | orchestrator | 08:33:34.168 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-02-10 08:33:34.180047 | orchestrator | 08:33:34.179 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-02-10 08:33:34.180181 | orchestrator | 08:33:34.179 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-02-10 08:33:34.183274 | orchestrator | 08:33:34.182 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-02-10 08:33:34.187382 | orchestrator | 08:33:34.187 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-02-10 08:33:34.188542 | orchestrator | 08:33:34.188 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-02-10 08:33:44.169874 | orchestrator | 08:33:44.169 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-02-10 08:33:44.181208 | orchestrator | 08:33:44.180 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-02-10 08:33:44.184356 | orchestrator | 08:33:44.181 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2025-02-10 08:33:44.184435 | orchestrator | 08:33:44.184 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-02-10 08:33:44.187413 | orchestrator | 08:33:44.187 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-02-10 08:33:44.188602 | orchestrator | 08:33:44.188 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-02-10 08:33:44.735961 | orchestrator | 08:33:44.735 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=572db54e-4929-4ea7-8010-4df3e8ce17b8] 2025-02-10 08:33:44.975466 | orchestrator | 08:33:44.974 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=58d8e3cc-2662-435a-9bb4-392870b91836] 2025-02-10 08:33:45.322626 | orchestrator | 08:33:45.322 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=859d9885-bfac-4fd5-a9dd-b93f7aab843e] 2025-02-10 08:33:45.537505 | orchestrator | 08:33:45.537 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=20fb8e47-5912-438c-afe5-79e78257fe82] 2025-02-10 08:33:54.173015 | orchestrator | 08:33:54.172 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-02-10 08:33:54.185629 | orchestrator | 08:33:54.185 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-02-10 08:33:55.066916 | orchestrator | 08:33:55.066 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=0f7810e5-0e3d-4a47-9283-4e3b6772fced] 2025-02-10 08:33:55.075719 | orchestrator | 08:33:55.075 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=f911f2a9-b49e-465c-bfd8-ac7e4b6608f8] 2025-02-10 08:33:55.099504 | orchestrator | 08:33:55.099 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-02-10 08:33:55.110045 | orchestrator | 08:33:55.109 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-02-10 08:33:55.116326 | orchestrator | 08:33:55.109 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-02-10 08:33:55.116392 | orchestrator | 08:33:55.116 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-02-10 08:33:55.125483 | orchestrator | 08:33:55.125 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-02-10 08:33:55.129471 | orchestrator | 08:33:55.129 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-02-10 08:33:55.130558 | orchestrator | 08:33:55.130 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-02-10 08:33:55.132641 | orchestrator | 08:33:55.132 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=276655464195174931] 2025-02-10 08:33:55.133737 | orchestrator | 08:33:55.133 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-02-10 08:33:55.147950 | orchestrator | 08:33:55.147 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-02-10 08:33:55.149816 | orchestrator | 08:33:55.149 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 
2025-02-10 08:33:55.161098 | orchestrator | 08:33:55.160 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-02-10 08:34:01.529191 | orchestrator | 08:34:01.528 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 7s [id=0f7810e5-0e3d-4a47-9283-4e3b6772fced/492baa9f-f661-44dd-a3d2-70d79942748c] 2025-02-10 08:34:01.530728 | orchestrator | 08:34:01.530 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 7s [id=58d8e3cc-2662-435a-9bb4-392870b91836/91337675-2774-4bb7-b881-e3b3f642e46a] 2025-02-10 08:34:01.546724 | orchestrator | 08:34:01.546 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 7s [id=859d9885-bfac-4fd5-a9dd-b93f7aab843e/086c202d-0ccf-4be9-aa6b-e4e971478b82] 2025-02-10 08:34:01.549241 | orchestrator | 08:34:01.549 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-02-10 08:34:01.550308 | orchestrator | 08:34:01.550 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 7s [id=20fb8e47-5912-438c-afe5-79e78257fe82/65f68ad4-1f17-45a3-95a2-0b9d82b524cb] 2025-02-10 08:34:01.551883 | orchestrator | 08:34:01.551 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-02-10 08:34:01.557823 | orchestrator | 08:34:01.557 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 7s [id=f911f2a9-b49e-465c-bfd8-ac7e4b6608f8/809e68db-7594-4e4e-90c0-4a7ae6eb5d4d] 2025-02-10 08:34:01.571158 | orchestrator | 08:34:01.570 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 7s [id=20fb8e47-5912-438c-afe5-79e78257fe82/734cf6c7-c554-44f9-b9cd-702600de9593] 2025-02-10 08:34:01.573550 | orchestrator | 08:34:01.573 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 7s [id=572db54e-4929-4ea7-8010-4df3e8ce17b8/c1ae1e45-2170-46e6-8462-912ee8672daa] 2025-02-10 08:34:01.574243 | orchestrator | 08:34:01.574 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-02-10 08:34:01.576457 | orchestrator | 08:34:01.576 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-02-10 08:34:01.580049 | orchestrator | 08:34:01.579 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 7s [id=0f7810e5-0e3d-4a47-9283-4e3b6772fced/23794fae-2c08-458a-becf-a15050b8218b] 2025-02-10 08:34:01.588888 | orchestrator | 08:34:01.588 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-02-10 08:34:01.591844 | orchestrator | 08:34:01.591 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 7s [id=f911f2a9-b49e-465c-bfd8-ac7e4b6608f8/be832b54-23bf-4f17-8551-69f0e04b6625] 2025-02-10 08:34:01.598430 | orchestrator | 08:34:01.598 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-02-10 08:34:01.601333 | orchestrator | 08:34:01.601 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-02-10 08:34:01.603400 | orchestrator | 08:34:01.603 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 
2025-02-10 08:34:01.615361 | orchestrator | 08:34:01.615 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-02-10 08:34:01.670565 | orchestrator | 08:34:01.670 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 7s [id=0f7810e5-0e3d-4a47-9283-4e3b6772fced/103f3392-831d-4ee6-b0f0-d6be015816d3] 2025-02-10 08:34:06.916560 | orchestrator | 08:34:06.914 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=f911f2a9-b49e-465c-bfd8-ac7e4b6608f8/a31d8f91-c02a-4f65-9bd6-abd5e53b34f2] 2025-02-10 08:34:06.971363 | orchestrator | 08:34:06.970 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=58d8e3cc-2662-435a-9bb4-392870b91836/d66bf247-6327-430d-be20-e0df09e5016f] 2025-02-10 08:34:06.971780 | orchestrator | 08:34:06.971 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=859d9885-bfac-4fd5-a9dd-b93f7aab843e/494ee814-0dd9-4f0f-8082-b266e2c53997] 2025-02-10 08:34:06.972114 | orchestrator | 08:34:06.971 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=20fb8e47-5912-438c-afe5-79e78257fe82/5ff65196-c1cf-41f3-a955-25be0154b459] 2025-02-10 08:34:06.973491 | orchestrator | 08:34:06.973 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=572db54e-4929-4ea7-8010-4df3e8ce17b8/96415da1-6a76-4477-bfa7-f065f33f8e6a] 2025-02-10 08:34:06.979105 | orchestrator | 08:34:06.978 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=58d8e3cc-2662-435a-9bb4-392870b91836/c3934c93-3cd2-4fec-bdf3-cbeea6813a64] 2025-02-10 08:34:07.088665 | orchestrator | 08:34:07.088 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=859d9885-bfac-4fd5-a9dd-b93f7aab843e/094c1351-6c25-40a9-b10a-7f3d6a96f205] 2025-02-10 08:34:07.115901 | orchestrator | 08:34:07.115 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=572db54e-4929-4ea7-8010-4df3e8ce17b8/4ec8f61f-9e5b-49cd-9e82-40bf07cffc70] 2025-02-10 08:34:11.618470 | orchestrator | 08:34:11.617 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-02-10 08:34:21.626926 | orchestrator | 08:34:21.623 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-02-10 08:34:22.218518 | orchestrator | 08:34:22.218 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=a9a40219-db92-4e1d-9673-3acca6b66829] 2025-02-10 08:34:22.239283 | orchestrator | 08:34:22.238 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
2025-02-10 08:34:22.239384 | orchestrator | 08:34:22.239 STDOUT terraform: Outputs: 2025-02-10 08:34:22.239455 | orchestrator | 08:34:22.239 STDOUT terraform: manager_address = 2025-02-10 08:34:22.250136 | orchestrator | 08:34:22.239 STDOUT terraform: private_key = 2025-02-10 08:34:32.788953 | orchestrator | changed 2025-02-10 08:34:32.824907 | 2025-02-10 08:34:32.825034 | TASK [Fetch manager address] 2025-02-10 08:34:33.266572 | orchestrator | ok 2025-02-10 08:34:33.278199 | 2025-02-10 08:34:33.278328 | TASK [Set manager_host address] 2025-02-10 08:34:33.391380 | orchestrator | ok 2025-02-10 08:34:33.401558 | 2025-02-10 08:34:33.401669 | LOOP [Update ansible collections] 2025-02-10 08:34:37.725899 | orchestrator | changed 2025-02-10 08:34:41.653291 | orchestrator | changed 2025-02-10 08:34:41.679562 | 2025-02-10 08:34:41.679757 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-02-10 08:34:52.251441 | orchestrator | ok 2025-02-10 08:34:52.262788 | 2025-02-10 08:34:52.262907 | TASK [Wait a little longer for the manager so that everything is ready] 2025-02-10 08:35:52.312286 | orchestrator | ok 2025-02-10 08:35:52.324055 | 2025-02-10 08:35:52.324177 | TASK [Fetch manager ssh hostkey] 2025-02-10 08:35:53.904819 | orchestrator | Output suppressed because no_log was given 2025-02-10 08:35:53.923139 | 2025-02-10 08:35:53.923281 | TASK [Get ssh keypair from terraform environment] 2025-02-10 08:35:54.469220 | orchestrator | changed 2025-02-10 08:35:54.488846 | 2025-02-10 08:35:54.488994 | TASK [Point out that the following task takes some time and does not give any output] 2025-02-10 08:35:54.540244 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-02-10 08:35:54.551295 | 2025-02-10 08:35:54.551405 | TASK [Run manager part 0] 2025-02-10 08:35:55.480839 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-02-10 08:35:55.541289 | orchestrator | 2025-02-10 08:35:57.437073 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-02-10 08:35:57.437137 | orchestrator | 2025-02-10 08:35:57.437156 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-02-10 08:35:57.437174 | orchestrator | ok: [testbed-manager] 2025-02-10 08:35:59.446949 | orchestrator | 2025-02-10 08:35:59.447040 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-02-10 08:35:59.447054 | orchestrator | 2025-02-10 08:35:59.447061 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:35:59.447078 | orchestrator | ok: [testbed-manager] 2025-02-10 08:36:00.127906 | orchestrator | 2025-02-10 08:36:00.127974 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-02-10 08:36:00.127994 | orchestrator | ok: [testbed-manager] 2025-02-10 08:36:00.192151 | orchestrator | 2025-02-10 08:36:00.192228 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-02-10 08:36:00.192248 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:36:00.222857 | orchestrator | 2025-02-10 08:36:00.222909 | orchestrator | TASK [Update package cache] **************************************************** 2025-02-10 08:36:00.222927 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:36:00.248639 | orchestrator | 2025-02-10 08:36:00.248685 | orchestrator | TASK [Install required packages] *********************************************** 2025-02-10 08:36:00.248700 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:36:00.276146 | orchestrator | 2025-02-10 08:36:00.276194 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-02-10 08:36:00.276209 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:36:00.305181 | orchestrator | 2025-02-10 08:36:00.305220 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-02-10 08:36:00.305234 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:36:00.353732 | orchestrator | 2025-02-10 08:36:00.353808 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-02-10 08:36:00.353827 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:36:00.396861 | orchestrator | 2025-02-10 08:36:00.396916 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-02-10 08:36:00.396932 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:36:01.333253 | orchestrator | 2025-02-10 08:36:01.333340 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-02-10 08:36:01.333367 | orchestrator | changed: [testbed-manager] 2025-02-10 08:38:26.615009 | orchestrator | 2025-02-10 08:38:26.615241 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-02-10 08:38:26.615294 | orchestrator | changed: [testbed-manager] 2025-02-10 08:39:44.221187 | orchestrator | 2025-02-10 08:39:44.221259 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-02-10 08:39:44.221289 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:04.714833 | orchestrator | 2025-02-10 08:40:04.714920 | orchestrator | TASK [Install required packages] *********************************************** 2025-02-10 08:40:04.714944 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:13.449403 | orchestrator | 2025-02-10 08:40:13.449524 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-02-10 08:40:13.449593 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:13.500975 | orchestrator | 2025-02-10 08:40:13.501082 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-02-10 08:40:13.501130 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:14.339030 | orchestrator | 2025-02-10 08:40:14.339129 | orchestrator | TASK [Get current user] ******************************************************** 2025-02-10 08:40:14.339165 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:15.077827 | orchestrator | 2025-02-10 08:40:15.077940 | orchestrator | TASK [Create venv directory] *************************************************** 2025-02-10 08:40:15.077987 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:21.676935 | orchestrator | 2025-02-10 08:40:21.677107 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-02-10 08:40:21.677128 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:27.745750 | orchestrator | 2025-02-10 08:40:27.745833 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-02-10 08:40:27.745865 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:30.364986 | orchestrator | 2025-02-10 08:40:30.365096 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-02-10 08:40:30.365132 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:32.197651 | orchestrator | 2025-02-10 08:40:32.197784 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-02-10 08:40:32.197842 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:33.337724 | orchestrator | 2025-02-10 08:40:33.337836 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-02-10 08:40:33.337876 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-02-10 08:40:33.385601 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-02-10 08:40:33.385702 | orchestrator | 2025-02-10 08:40:33.385723 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-02-10 08:40:33.385756 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-02-10 08:40:36.783561 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-02-10 08:40:36.783642 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-02-10 08:40:36.783656 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-02-10 08:40:36.783676 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-02-10 08:40:37.363449 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-02-10 08:40:37.363583 | orchestrator | 2025-02-10 08:40:37.363607 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-02-10 08:40:37.363640 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:57.615353 | orchestrator | 2025-02-10 08:40:57.615403 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-02-10 08:40:57.615418 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-02-10 08:40:59.948575 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-02-10 08:40:59.948686 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-02-10 08:40:59.948706 | orchestrator | 2025-02-10 08:40:59.948724 | orchestrator | TASK [Install local collections] *********************************************** 2025-02-10 08:40:59.948756 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-02-10 08:41:01.368931 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-02-10 08:41:01.369061 | orchestrator | 2025-02-10 08:41:01.369087 | orchestrator | PLAY [Create operator user] **************************************************** 2025-02-10 08:41:01.369103 | orchestrator | 2025-02-10 08:41:01.369118 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:41:01.369148 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:01.415862 | orchestrator | 2025-02-10 08:41:01.415929 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-02-10 08:41:01.415947 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:01.480386 | orchestrator | 2025-02-10 08:41:01.480483 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-02-10 08:41:01.480518 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:02.320305 | orchestrator | 2025-02-10 08:41:02.320365 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-02-10 08:41:02.320387 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:03.064318 | orchestrator | 2025-02-10 08:41:03.064369 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-02-10 08:41:03.064386 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:04.500161 | orchestrator | 2025-02-10 08:41:04.500278 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-02-10 08:41:04.500316 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-02-10 08:41:05.927153 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-02-10 08:41:05.927268 | orchestrator | 2025-02-10 08:41:05.927290 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-02-10 08:41:05.927324 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:07.699354 | orchestrator | 2025-02-10 08:41:07.699477 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-02-10 08:41:07.699516 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 
08:41:08.271763 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-02-10 08:41:08.271876 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-02-10 08:41:08.271897 | orchestrator | 2025-02-10 08:41:08.271913 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-02-10 08:41:08.271947 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:08.341798 | orchestrator | 2025-02-10 08:41:08.341933 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-02-10 08:41:08.341973 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:09.212333 | orchestrator | 2025-02-10 08:41:09.212465 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-02-10 08:41:09.212523 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:41:09.250966 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:09.251077 | orchestrator | 2025-02-10 08:41:09.251097 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-02-10 08:41:09.251130 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:09.289911 | orchestrator | 2025-02-10 08:41:09.290055 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-02-10 08:41:09.290095 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:09.323082 | orchestrator | 2025-02-10 08:41:09.323171 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-02-10 08:41:09.323197 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:09.382990 | orchestrator | 2025-02-10 08:41:09.383101 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-02-10 08:41:09.383136 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:10.117092 | orchestrator | 2025-02-10 08:41:10.117196 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-02-10 08:41:10.117236 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:11.557557 | orchestrator | 2025-02-10 08:41:11.557608 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-02-10 08:41:11.557615 | orchestrator | 2025-02-10 08:41:11.557621 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:41:11.557634 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:12.544714 | orchestrator | 2025-02-10 08:41:12.544778 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-02-10 08:41:12.544801 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:12.683072 | orchestrator | 2025-02-10 08:41:12.683153 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:41:12.683161 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-02-10 08:41:12.683167 | orchestrator | 2025-02-10 08:41:12.813832 | orchestrator | changed 2025-02-10 08:41:12.824623 | 2025-02-10 08:41:12.824727 | TASK [Point out that the log in on the manager is now possible] 2025-02-10 08:41:12.876776 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2025-02-10 08:41:12.890223 | 2025-02-10 08:41:12.890373 | TASK [Point out that the following task takes some time and does not give any output] 2025-02-10 08:41:12.940555 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-02-10 08:41:12.951227 | 2025-02-10 08:41:12.951338 | TASK [Run manager part 1 + 2] 2025-02-10 08:41:13.827391 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-02-10 08:41:13.890779 | orchestrator | 2025-02-10 08:41:16.400635 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-02-10 08:41:16.400886 | orchestrator | 2025-02-10 08:41:16.400944 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:41:16.400985 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:16.446287 | orchestrator | 2025-02-10 08:41:16.446379 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-02-10 08:41:16.446412 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:16.485197 | orchestrator | 2025-02-10 08:41:16.485258 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-02-10 08:41:16.485274 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:16.519235 | orchestrator | 2025-02-10 08:41:16.519330 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-10 08:41:16.519364 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:16.606891 | orchestrator | 2025-02-10 08:41:16.606988 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-10 08:41:16.607023 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:16.675227 | orchestrator | 2025-02-10 08:41:16.675298 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-10 08:41:16.675315 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:16.729091 | orchestrator | 2025-02-10 08:41:16.729180 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-10 08:41:16.729204 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-02-10 08:41:17.444994 | orchestrator | 2025-02-10 08:41:17.445074 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-10 08:41:17.445093 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:17.486620 | orchestrator | 2025-02-10 08:41:17.486688 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-10 08:41:17.486707 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:18.944789 | orchestrator | 2025-02-10 08:41:18.944865 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-10 08:41:18.944893 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:19.530766 | orchestrator | 2025-02-10 08:41:19.530834 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-10 08:41:19.530854 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:20.668740 | orchestrator | 2025-02-10 08:41:20.668815 | orchestrator | TASK 
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-10 08:41:20.668839 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:33.654506 | orchestrator | 2025-02-10 08:41:33.654727 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-10 08:41:33.654746 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:34.386893 | orchestrator | 2025-02-10 08:41:34.387033 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-02-10 08:41:34.387091 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:34.441179 | orchestrator | 2025-02-10 08:41:34.441297 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-02-10 08:41:34.441339 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:35.446095 | orchestrator | 2025-02-10 08:41:35.446700 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-02-10 08:41:35.446748 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:36.447284 | orchestrator | 2025-02-10 08:41:36.447405 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-02-10 08:41:36.447442 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:37.053598 | orchestrator | 2025-02-10 08:41:37.053707 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-02-10 08:41:37.053743 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:37.100599 | orchestrator | 2025-02-10 08:41:37.100717 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-02-10 08:41:37.100749 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-02-10 08:41:39.573183 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-02-10 08:41:39.573288 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-02-10 08:41:39.573308 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-02-10 08:41:39.573336 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:48.751122 | orchestrator | 2025-02-10 08:41:48.751188 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-02-10 08:41:48.751207 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-02-10 08:41:49.822753 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-02-10 08:41:49.822874 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-02-10 08:41:49.822894 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-02-10 08:41:49.822911 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-02-10 08:41:49.822926 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-02-10 08:41:49.822941 | orchestrator | 2025-02-10 08:41:49.822957 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-02-10 08:41:49.823007 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:49.870685 | orchestrator | 2025-02-10 08:41:49.870797 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-02-10 08:41:49.870829 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:53.065182 | orchestrator | 2025-02-10 08:41:53.065267 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-02-10 08:41:53.065287 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:53.105731 | orchestrator | 2025-02-10 08:41:53.105819 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-02-10 08:41:53.105853 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:43:32.373601 | orchestrator | 2025-02-10 08:43:32.373702 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-02-10 08:43:32.373738 | orchestrator | changed: [testbed-manager] 2025-02-10 08:43:33.537410 | orchestrator | 2025-02-10 08:43:33.537467 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-10 08:43:33.537483 | orchestrator | ok: [testbed-manager] 2025-02-10 08:43:33.657654 | orchestrator | 2025-02-10 08:43:33.657875 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:43:33.657889 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-02-10 08:43:33.657895 | orchestrator | 2025-02-10 08:43:34.116294 | orchestrator | changed 2025-02-10 08:43:34.135481 | 2025-02-10 08:43:34.135619 | TASK [Reboot manager] 2025-02-10 08:43:35.680195 | orchestrator | changed 2025-02-10 08:43:35.698257 | 2025-02-10 08:43:35.698463 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-02-10 08:43:50.312786 | orchestrator | ok 2025-02-10 08:43:50.324645 | 2025-02-10 08:43:50.324777 | TASK [Wait a little longer for the manager so that everything is ready] 2025-02-10 08:44:50.374096 | orchestrator | ok 2025-02-10 08:44:50.386625 | 2025-02-10 08:44:50.386748 | TASK [Deploy manager + bootstrap nodes] 2025-02-10 08:44:52.880494 | orchestrator | 2025-02-10 08:44:52.884312 | orchestrator | # DEPLOY MANAGER 2025-02-10 08:44:52.884393 | orchestrator | 2025-02-10 08:44:52.884425 | orchestrator | + set -e 2025-02-10 08:44:52.884487 | orchestrator | + echo 2025-02-10 08:44:52.884508 | orchestrator | + echo '# DEPLOY MANAGER' 2025-02-10 08:44:52.884569 | 
orchestrator | + echo 2025-02-10 08:44:52.884596 | orchestrator | + cat /opt/manager-vars.sh 2025-02-10 08:44:52.884635 | orchestrator | export NUMBER_OF_NODES=6 2025-02-10 08:44:52.884802 | orchestrator | 2025-02-10 08:44:52.884829 | orchestrator | export CEPH_VERSION=quincy 2025-02-10 08:44:52.884844 | orchestrator | export CONFIGURATION_VERSION=main 2025-02-10 08:44:52.884860 | orchestrator | export MANAGER_VERSION=8.1.0 2025-02-10 08:44:52.884875 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-02-10 08:44:52.884890 | orchestrator | 2025-02-10 08:44:52.884906 | orchestrator | export ARA=false 2025-02-10 08:44:52.884921 | orchestrator | export TEMPEST=false 2025-02-10 08:44:52.884936 | orchestrator | export IS_ZUUL=true 2025-02-10 08:44:52.884951 | orchestrator | 2025-02-10 08:44:52.884966 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 08:44:52.884982 | orchestrator | export EXTERNAL_API=false 2025-02-10 08:44:52.884997 | orchestrator | 2025-02-10 08:44:52.885012 | orchestrator | export IMAGE_USER=ubuntu 2025-02-10 08:44:52.885027 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:52.885044 | orchestrator | 2025-02-10 08:44:52.885058 | orchestrator | export CEPH_STACK=ceph-ansible 2025-02-10 08:44:52.885081 | orchestrator | 2025-02-10 08:44:52.885973 | orchestrator | + echo 2025-02-10 08:44:52.886008 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 08:44:52.886145 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 08:44:52.886165 | orchestrator | ++ INTERACTIVE=false 2025-02-10 08:44:52.886179 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 08:44:52.886252 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 08:44:52.886270 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 08:44:52.886285 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 08:44:52.886299 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 08:44:52.886313 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 08:44:52.886327 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 08:44:52.886373 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 08:44:52.886446 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 08:44:52.886482 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 08:44:52.886506 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 08:44:52.886556 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 08:44:52.886571 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 08:44:52.886586 | orchestrator | ++ export ARA=false 2025-02-10 08:44:52.886600 | orchestrator | ++ ARA=false 2025-02-10 08:44:52.886614 | orchestrator | ++ export TEMPEST=false 2025-02-10 08:44:52.886628 | orchestrator | ++ TEMPEST=false 2025-02-10 08:44:52.886642 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 08:44:52.886656 | orchestrator | ++ IS_ZUUL=true 2025-02-10 08:44:52.886671 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 08:44:52.886685 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 08:44:52.886707 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 08:44:52.886721 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 08:44:52.886735 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 08:44:52.886749 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 08:44:52.886764 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:52.886778 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:52.886800 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-02-10 08:44:52.943245 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 08:44:52.943376 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-02-10 08:44:52.943440 | orchestrator | + docker version 2025-02-10 08:44:53.202360 | orchestrator | Client: Docker Engine - Community 2025-02-10 08:44:53.205494 | orchestrator | Version: 26.1.4 2025-02-10 08:44:53.205622 | orchestrator | API version: 1.45 2025-02-10 08:44:53.205645 | orchestrator | Go version: go1.21.11 2025-02-10 08:44:53.205661 | orchestrator | Git commit: 5650f9b 2025-02-10 08:44:53.205676 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-02-10 08:44:53.205692 | orchestrator | OS/Arch: linux/amd64 2025-02-10 08:44:53.205707 | orchestrator | Context: default 2025-02-10 08:44:53.205722 | orchestrator | 2025-02-10 08:44:53.205737 | orchestrator | Server: Docker Engine - Community 2025-02-10 08:44:53.205751 | orchestrator | Engine: 2025-02-10 08:44:53.205766 | orchestrator | Version: 26.1.4 2025-02-10 08:44:53.205780 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-02-10 08:44:53.205794 | orchestrator | Go version: go1.21.11 2025-02-10 08:44:53.205820 | orchestrator | Git commit: de5c9cf 2025-02-10 08:44:53.205865 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-02-10 08:44:53.205880 | orchestrator | OS/Arch: linux/amd64 2025-02-10 08:44:53.205894 | orchestrator | Experimental: false 2025-02-10 08:44:53.205908 | orchestrator | containerd: 2025-02-10 08:44:53.205923 | orchestrator | Version: 1.7.25 2025-02-10 08:44:53.205937 | orchestrator | GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb 2025-02-10 08:44:53.205951 | orchestrator | runc: 2025-02-10 08:44:53.205965 | orchestrator | Version: 1.2.4 2025-02-10 08:44:53.205980 | orchestrator | GitCommit: v1.2.4-0-g6c52b3f 2025-02-10 08:44:53.205994 | orchestrator | docker-init: 2025-02-10 08:44:53.206008 | orchestrator | Version: 0.19.0 2025-02-10 08:44:53.206081 | orchestrator | GitCommit: de40ad0 2025-02-10 08:44:53.206112 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-02-10 08:44:53.214858 | orchestrator | + set -e 2025-02-10 08:44:53.214925 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 08:44:53.214954 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 08:44:53.214969 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 08:44:53.214984 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 08:44:53.214998 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 08:44:53.215012 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 08:44:53.215028 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 08:44:53.215042 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 08:44:53.215057 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 08:44:53.215071 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 08:44:53.215084 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 08:44:53.215098 | orchestrator | ++ export ARA=false 2025-02-10 08:44:53.215112 | orchestrator | ++ ARA=false 2025-02-10 08:44:53.215127 | orchestrator | ++ export TEMPEST=false 2025-02-10 08:44:53.215140 | orchestrator | ++ TEMPEST=false 2025-02-10 08:44:53.215154 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 08:44:53.215168 | orchestrator | ++ IS_ZUUL=true 2025-02-10 08:44:53.215183 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 08:44:53.215197 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 
2025-02-10 08:44:53.215211 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 08:44:53.215224 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 08:44:53.215238 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 08:44:53.215256 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 08:44:53.215270 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:53.215289 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:53.215304 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 08:44:53.215318 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 08:44:53.215332 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 08:44:53.215346 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 08:44:53.215359 | orchestrator | ++ INTERACTIVE=false 2025-02-10 08:44:53.215373 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 08:44:53.215388 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 08:44:53.215406 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-02-10 08:44:53.220332 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-02-10 08:44:53.220366 | orchestrator | + set -e 2025-02-10 08:44:53.226998 | orchestrator | + VERSION=8.1.0 2025-02-10 08:44:53.227054 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-02-10 08:44:53.227092 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-02-10 08:44:53.232216 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-02-10 08:44:53.232263 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-02-10 08:44:53.236384 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-02-10 08:44:53.244687 | orchestrator | /opt/configuration ~ 2025-02-10 08:44:53.247866 | orchestrator | + set -e 2025-02-10 08:44:53.247903 | orchestrator | + pushd /opt/configuration 2025-02-10 08:44:53.247920 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-10 08:44:53.247942 | orchestrator | + source /opt/venv/bin/activate 2025-02-10 08:44:53.248825 | orchestrator | ++ deactivate nondestructive 2025-02-10 08:44:53.248854 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:53.249032 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:53.249055 | orchestrator | ++ hash -r 2025-02-10 08:44:53.249074 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:53.249322 | orchestrator | ++ unset VIRTUAL_ENV 2025-02-10 08:44:53.249368 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-02-10 08:44:53.249395 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-02-10 08:44:53.249465 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-02-10 08:44:53.249516 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-02-10 08:44:53.249592 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-02-10 08:44:53.249633 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-02-10 08:44:53.249664 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:44:53.249680 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:44:53.249694 | orchestrator | ++ export PATH 2025-02-10 08:44:53.249709 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:53.249727 | orchestrator | ++ '[' -z '' ']' 2025-02-10 08:44:53.249909 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-02-10 08:44:53.249926 | orchestrator | ++ PS1='(venv) ' 2025-02-10 08:44:53.249941 | orchestrator | ++ export PS1 2025-02-10 08:44:53.249955 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-02-10 08:44:53.249970 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-02-10 08:44:53.249984 | orchestrator | ++ hash -r 2025-02-10 08:44:53.250003 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-02-10 08:44:54.371319 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-02-10 08:44:54.372110 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-02-10 08:44:54.373554 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.5) 2025-02-10 08:44:54.374904 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-02-10 08:44:54.375856 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (24.2) 2025-02-10 08:44:54.386145 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-02-10 08:44:54.387510 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-02-10 08:44:54.388556 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-02-10 08:44:54.389790 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.1) 2025-02-10 08:44:54.421169 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1) 2025-02-10 08:44:54.422610 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-02-10 08:44:54.423906 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.3.0) 2025-02-10 08:44:54.425265 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31) 2025-02-10 08:44:54.429217 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-02-10 08:44:54.636694 | orchestrator | ++ which gilt 2025-02-10 08:44:54.642058 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-02-10 08:44:54.911048 | orchestrator | + /opt/venv/bin/gilt overlay 2025-02-10 08:44:54.911241 | orchestrator | osism.cfg-generics: 2025-02-10 08:44:56.432235 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-02-10 08:44:56.432425 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-02-10 08:44:57.390995 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-02-10 08:44:57.391159 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-02-10 08:44:57.391182 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-02-10 08:44:57.391226 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-02-10 08:44:57.402366 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-02-10 08:44:57.739359 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-02-10 08:44:57.798396 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-10 08:44:57.798810 | orchestrator | + deactivate 2025-02-10 08:44:57.798861 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-02-10 08:44:57.798876 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:44:57.798888 | orchestrator | + export PATH 2025-02-10 08:44:57.798900 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-02-10 08:44:57.798911 | orchestrator | + '[' -n '' ']' 2025-02-10 08:44:57.798923 | orchestrator | + hash -r 2025-02-10 08:44:57.798934 | orchestrator | + '[' -n '' ']' 2025-02-10 08:44:57.798946 | orchestrator | + unset VIRTUAL_ENV 2025-02-10 08:44:57.798957 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-02-10 08:44:57.798969 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-02-10 08:44:57.798980 | orchestrator | + unset -f deactivate 2025-02-10 08:44:57.799002 | orchestrator | ~ 2025-02-10 08:44:57.801053 | orchestrator | + popd 2025-02-10 08:44:57.801149 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-02-10 08:44:57.801732 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-02-10 08:44:57.801756 | orchestrator | ++ semver 8.1.0 7.0.0 2025-02-10 08:44:57.875730 | orchestrator | + [[ 1 -ge 0 ]] 2025-02-10 08:44:57.919544 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-02-10 08:44:57.919708 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-02-10 08:44:57.919752 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-10 08:44:59.243332 | orchestrator | + source /opt/venv/bin/activate 2025-02-10 08:44:59.243480 | orchestrator | ++ deactivate nondestructive 2025-02-10 08:44:59.243501 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:59.243579 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:59.243596 | orchestrator | ++ hash -r 2025-02-10 08:44:59.243611 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:59.243625 | orchestrator | ++ unset VIRTUAL_ENV 2025-02-10 08:44:59.243643 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-02-10 08:44:59.243667 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-02-10 08:44:59.243693 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-02-10 08:44:59.243716 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-02-10 08:44:59.243739 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-02-10 08:44:59.243754 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-02-10 08:44:59.243769 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:44:59.243784 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:44:59.243798 | orchestrator | ++ export PATH 2025-02-10 08:44:59.243813 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:59.243827 | orchestrator | ++ '[' -z '' ']' 2025-02-10 08:44:59.243841 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-02-10 08:44:59.243859 | orchestrator | ++ PS1='(venv) ' 2025-02-10 08:44:59.243883 | orchestrator | ++ export PS1 2025-02-10 08:44:59.243904 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-02-10 08:44:59.243926 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-02-10 08:44:59.243953 | orchestrator | ++ hash -r 2025-02-10 08:44:59.243976 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-02-10 08:44:59.244022 | orchestrator | 2025-02-10 08:44:59.876338 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-02-10 08:44:59.876495 | orchestrator | 2025-02-10 08:44:59.876559 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-10 08:44:59.876597 | orchestrator | ok: [testbed-manager] 2025-02-10 08:45:00.975702 | orchestrator | 2025-02-10 08:45:00.975861 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-02-10 08:45:00.975909 | orchestrator | changed: [testbed-manager] 2025-02-10 08:45:03.396452 | orchestrator | 2025-02-10 08:45:03.396669 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-02-10 
08:45:03.396697 | orchestrator | 2025-02-10 08:45:03.396714 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:45:03.396749 | orchestrator | ok: [testbed-manager] 2025-02-10 08:45:09.073421 | orchestrator | 2025-02-10 08:45:09.073624 | orchestrator | TASK [Pull images] ************************************************************* 2025-02-10 08:45:09.073698 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/ara-server:1.7.2) 2025-02-10 08:46:26.153304 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-02-10 08:46:26.153465 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/ceph-ansible:8.1.0) 2025-02-10 08:46:26.153485 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/inventory-reconciler:8.1.0) 2025-02-10 08:46:26.153499 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/kolla-ansible:8.1.0) 2025-02-10 08:46:26.153566 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-02-10 08:46:26.153591 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/netbox:v4.1.7) 2025-02-10 08:46:26.153611 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism-ansible:8.1.0) 2025-02-10 08:46:26.153632 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism:0.20241219.2) 2025-02-10 08:46:26.153654 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism-netbox:0.20241219.2) 2025-02-10 08:46:26.153676 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-02-10 08:46:26.153690 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1) 2025-02-10 08:46:26.153704 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-02-10 08:46:26.153725 | orchestrator | 2025-02-10 08:46:26.153746 | orchestrator | TASK [Check status] ************************************************************ 2025-02-10 08:46:26.153791 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-10 08:46:26.153812 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-02-10 08:46:26.153834 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-02-10 08:46:26.153859 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j391677899871.1581', 'results_file': '/home/dragon/.ansible_async/j391677899871.1581', 'changed': True, 'item': 'quay.io/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.153892 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-10 08:46:26.153919 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j197475497373.1606', 'results_file': '/home/dragon/.ansible_async/j197475497373.1606', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.153947 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
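
The "Pull images" task above starts one asynchronous pull per image, and the following "Check status" task polls those async jobs (up to 120 retries each) until every pull reports finished. As a rough manual analogue of the same pull-in-parallel-then-wait pattern, not the playbook's actual implementation, the image list from the log could be pre-pulled like this:

    #!/usr/bin/env bash
    # Sketch: pre-pull the images listed above in parallel, then wait for all
    # pulls to finish. The playbook does this with asynchronous Ansible tasks
    # plus the polling "Check status" task; this loop is only a manual analogue.
    images=(
        quay.io/osism/ara-server:1.7.2
        index.docker.io/library/mariadb:11.6.2
        quay.io/osism/ceph-ansible:8.1.0
        quay.io/osism/inventory-reconciler:8.1.0
        quay.io/osism/kolla-ansible:8.1.0
        index.docker.io/library/redis:7.4.1-alpine
        quay.io/osism/netbox:v4.1.7
        quay.io/osism/osism-ansible:8.1.0
        quay.io/osism/osism:0.20241219.2
        quay.io/osism/osism-netbox:0.20241219.2
        index.docker.io/library/postgres:16.6-alpine
        index.docker.io/library/traefik:v3.2.1
        index.docker.io/hashicorp/vault:1.18.2
    )

    for image in "${images[@]}"; do
        docker pull --quiet "$image" &   # one background pull per image
    done
    wait                                 # return only after every pull has finished
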
2025-02-10 08:46:26.153967 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j514195405039.1631', 'results_file': '/home/dragon/.ansible_async/j514195405039.1631', 'changed': True, 'item': 'quay.io/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.153989 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j462184183487.1663', 'results_file': '/home/dragon/.ansible_async/j462184183487.1663', 'changed': True, 'item': 'quay.io/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154011 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-10 08:46:26.154110 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j975397972650.1695', 'results_file': '/home/dragon/.ansible_async/j975397972650.1695', 'changed': True, 'item': 'quay.io/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154134 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j910385188617.1727', 'results_file': '/home/dragon/.ansible_async/j910385188617.1727', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154163 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j723913818708.1760', 'results_file': '/home/dragon/.ansible_async/j723913818708.1760', 'changed': True, 'item': 'quay.io/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154218 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-02-10 08:46:26.154242 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j767705983937.1791', 'results_file': '/home/dragon/.ansible_async/j767705983937.1791', 'changed': True, 'item': 'quay.io/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154257 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j782345141853.1823', 'results_file': '/home/dragon/.ansible_async/j782345141853.1823', 'changed': True, 'item': 'quay.io/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154269 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j611540150887.1855', 'results_file': '/home/dragon/.ansible_async/j611540150887.1855', 'changed': True, 'item': 'quay.io/osism/osism-netbox:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154282 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j240623535343.1890', 'results_file': '/home/dragon/.ansible_async/j240623535343.1890', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154295 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j717906595161.1922', 'results_file': '/home/dragon/.ansible_async/j717906595161.1922', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.154331 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j181724304136.1983', 'results_file': '/home/dragon/.ansible_async/j181724304136.1983', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:26.202238 | orchestrator | 2025-02-10 08:46:26.202416 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-02-10 08:46:26.202474 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:26.687176 | orchestrator | 2025-02-10 08:46:26.687320 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-02-10 08:46:26.687360 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:27.029026 | orchestrator | 2025-02-10 08:46:27.029164 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-02-10 08:46:27.029207 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:27.409779 | orchestrator | 2025-02-10 08:46:27.409919 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-02-10 08:46:27.409974 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:27.464026 | orchestrator | 2025-02-10 08:46:27.464175 | orchestrator | TASK [Do not use Nexus for Ceph on CentOS] ************************************* 2025-02-10 08:46:27.464218 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:27.529598 | orchestrator | 2025-02-10 08:46:27.529769 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-02-10 08:46:27.529825 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:27.873661 | orchestrator | 2025-02-10 08:46:27.873842 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 
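
The "Install HWE kernel package on Ubuntu" task earlier in this play reports changed, so the hardware enablement kernel was installed on the manager. A manual equivalent on the Ubuntu 24.04 testbed image would look roughly like the sketch below; the exact package name used by the task is not visible in the log, so it is an assumption here:

    # Sketch: install the HWE kernel on Ubuntu 24.04 by hand. The package name
    # linux-generic-hwe-24.04 is an assumption; the task's apt arguments are not
    # shown in this log.
    sudo apt-get update
    sudo apt-get install -y linux-generic-hwe-24.04
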
2025-02-10 08:46:27.873904 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:28.048792 | orchestrator | 2025-02-10 08:46:28.048936 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-02-10 08:46:28.048976 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:29.902625 | orchestrator | 2025-02-10 08:46:29.902772 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-02-10 08:46:29.902794 | orchestrator | 2025-02-10 08:46:29.902810 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:46:29.902844 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:30.129476 | orchestrator | 2025-02-10 08:46:30.129690 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-02-10 08:46:30.129794 | orchestrator | 2025-02-10 08:46:30.246256 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-02-10 08:46:30.246419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-02-10 08:46:31.372095 | orchestrator | 2025-02-10 08:46:31.372253 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-02-10 08:46:31.372296 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-02-10 08:46:33.245994 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-02-10 08:46:33.246245 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-02-10 08:46:33.246266 | orchestrator | 2025-02-10 08:46:33.246283 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-02-10 08:46:33.246320 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-02-10 08:46:33.968436 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-02-10 08:46:33.968606 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-02-10 08:46:33.968623 | orchestrator | 2025-02-10 08:46:33.968636 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-02-10 08:46:33.968665 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:34.665770 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:34.665917 | orchestrator | 2025-02-10 08:46:34.665952 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-02-10 08:46:34.666003 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:34.754507 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:34.754738 | orchestrator | 2025-02-10 08:46:34.754770 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-02-10 08:46:34.754820 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:35.158294 | orchestrator | 2025-02-10 08:46:35.158460 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-02-10 08:46:35.158503 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:35.268216 | orchestrator | 2025-02-10 08:46:35.268369 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-02-10 08:46:35.268411 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-02-10 08:46:36.295066 | orchestrator | 2025-02-10 08:46:36.295183 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-02-10 08:46:36.295220 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:37.160185 | orchestrator | 2025-02-10 08:46:37.160307 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-02-10 08:46:37.160345 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:40.317852 | orchestrator | 2025-02-10 08:46:40.318009 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-02-10 08:46:40.318112 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:40.634149 | orchestrator | 2025-02-10 08:46:40.634274 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-02-10 08:46:40.634306 | orchestrator | 2025-02-10 08:46:40.759873 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-02-10 08:46:40.760032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 08:46:43.254474 | orchestrator | 2025-02-10 08:46:43.254724 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-02-10 08:46:43.254771 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:43.411352 | orchestrator | 2025-02-10 08:46:43.411597 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-02-10 08:46:43.411665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-02-10 08:46:44.520467 | orchestrator | 2025-02-10 08:46:44.520671 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-02-10 08:46:44.520732 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-02-10 08:46:44.666220 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-02-10 08:46:44.666371 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-02-10 08:46:44.666392 | orchestrator | 2025-02-10 08:46:44.666408 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-02-10 08:46:44.666444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-02-10 08:46:45.333485 | orchestrator | 2025-02-10 08:46:45.333713 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-02-10 08:46:45.333754 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-02-10 08:46:45.996046 | orchestrator | 2025-02-10 08:46:45.996192 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-02-10 08:46:45.996233 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:46.421800 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:46.421936 | orchestrator | 2025-02-10 08:46:46.421955 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-02-10 08:46:46.421988 | orchestrator | 
changed: [testbed-manager] 2025-02-10 08:46:46.778469 | orchestrator | 2025-02-10 08:46:46.778624 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-02-10 08:46:46.778651 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:46.846170 | orchestrator | 2025-02-10 08:46:46.846318 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-02-10 08:46:46.846374 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:47.519021 | orchestrator | 2025-02-10 08:46:47.519166 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-02-10 08:46:47.519208 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:47.630223 | orchestrator | 2025-02-10 08:46:47.630365 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-02-10 08:46:47.630423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-02-10 08:46:48.401790 | orchestrator | 2025-02-10 08:46:48.401953 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-02-10 08:46:48.401995 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-02-10 08:46:49.087705 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-02-10 08:46:49.087809 | orchestrator | 2025-02-10 08:46:49.087820 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-02-10 08:46:49.087838 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-02-10 08:46:49.765881 | orchestrator | 2025-02-10 08:46:49.765992 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-02-10 08:46:49.766053 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:49.829420 | orchestrator | 2025-02-10 08:46:49.829602 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-02-10 08:46:49.829646 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:50.476312 | orchestrator | 2025-02-10 08:46:50.476462 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-02-10 08:46:50.476505 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:52.286499 | orchestrator | 2025-02-10 08:46:52.286633 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-02-10 08:46:52.286662 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:58.276447 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:58.276621 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:58.276637 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:58.276648 | orchestrator | 2025-02-10 08:46:58.276658 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-02-10 08:46:58.276685 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-02-10 08:46:58.962331 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-02-10 08:46:58.963424 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-02-10 08:46:58.963551 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-02-10 08:46:58.963584 | 
orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-02-10 08:46:58.963644 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-02-10 08:46:58.963660 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-02-10 08:46:58.963675 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-02-10 08:46:58.963691 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-02-10 08:46:58.963705 | orchestrator | changed: [testbed-manager] => (item=users) 2025-02-10 08:46:58.963719 | orchestrator | 2025-02-10 08:46:58.963735 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-02-10 08:46:58.963773 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-02-10 08:46:59.128759 | orchestrator | 2025-02-10 08:46:59.128902 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-02-10 08:46:59.128944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-02-10 08:46:59.844589 | orchestrator | 2025-02-10 08:46:59.844726 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-02-10 08:46:59.844768 | orchestrator | changed: [testbed-manager] 2025-02-10 08:47:00.490981 | orchestrator | 2025-02-10 08:47:00.491077 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-02-10 08:47:00.491101 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:01.256928 | orchestrator | 2025-02-10 08:47:01.257060 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-02-10 08:47:01.257099 | orchestrator | changed: [testbed-manager] 2025-02-10 08:47:05.772226 | orchestrator | 2025-02-10 08:47:05.772408 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-02-10 08:47:05.772450 | orchestrator | changed: [testbed-manager] 2025-02-10 08:47:06.773336 | orchestrator | 2025-02-10 08:47:06.773479 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-02-10 08:47:06.773588 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:29.030219 | orchestrator | 2025-02-10 08:47:29.030403 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-02-10 08:47:29.030445 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 
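
Both roles above follow the same pattern: create directories under /opt, render the configuration and a docker-compose.yml, and then start the service ("Manage traefik service" / "Manage netbox service", the latter needing one retry). Assuming both stacks are managed through systemd units named traefik.service and netbox.service (the netbox unit file is copied above; the traefik unit name is an assumption), their state can be inspected on the manager like this:

    # Sketch: check the stacks started by the two roles. Unit names are
    # assumptions; the compose project directories /opt/traefik and /opt/netbox
    # are taken from the "Create required directories" tasks above.
    systemctl is-active traefik.service netbox.service
    docker compose --project-directory /opt/traefik ps
    docker compose --project-directory /opt/netbox ps
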
2025-02-10 08:47:29.125000 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:29.125143 | orchestrator | 2025-02-10 08:47:29.125164 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-02-10 08:47:29.125202 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:29.176130 | orchestrator | 2025-02-10 08:47:29.176256 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-02-10 08:47:29.176274 | orchestrator | 2025-02-10 08:47:29.176289 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-02-10 08:47:29.176321 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:29.271719 | orchestrator | 2025-02-10 08:47:29.271856 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-02-10 08:47:29.271895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-02-10 08:47:30.132004 | orchestrator | 2025-02-10 08:47:30.132144 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-02-10 08:47:30.132187 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:30.232023 | orchestrator | 2025-02-10 08:47:30.232199 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-02-10 08:47:30.232242 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:30.289008 | orchestrator | 2025-02-10 08:47:30.289139 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-02-10 08:47:30.289177 | orchestrator | ok: [testbed-manager] => { 2025-02-10 08:47:30.968455 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-02-10 08:47:30.968656 | orchestrator | } 2025-02-10 08:47:30.968678 | orchestrator | 2025-02-10 08:47:30.968694 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-02-10 08:47:30.968729 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:31.820858 | orchestrator | 2025-02-10 08:47:31.821016 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-02-10 08:47:31.821050 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:31.913960 | orchestrator | 2025-02-10 08:47:31.914147 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-02-10 08:47:31.914197 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:31.961074 | orchestrator | 2025-02-10 08:47:31.961187 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-02-10 08:47:31.961216 | orchestrator | ok: [testbed-manager] => { 2025-02-10 08:47:32.026789 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-02-10 08:47:32.026926 | orchestrator | } 2025-02-10 08:47:32.026943 | orchestrator | 2025-02-10 08:47:32.026957 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-02-10 08:47:32.026989 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:32.095713 | orchestrator | 2025-02-10 08:47:32.095835 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-02-10 08:47:32.095870 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:32.147256 | 
orchestrator | 2025-02-10 08:47:32.147378 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-02-10 08:47:32.147410 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:32.221619 | orchestrator | 2025-02-10 08:47:32.221781 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-02-10 08:47:32.221823 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:32.278314 | orchestrator | 2025-02-10 08:47:32.278446 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-02-10 08:47:32.278484 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:32.337281 | orchestrator | 2025-02-10 08:47:32.337406 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-02-10 08:47:32.337457 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:33.587415 | orchestrator | 2025-02-10 08:47:33.587622 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-02-10 08:47:33.587664 | orchestrator | changed: [testbed-manager] 2025-02-10 08:47:33.732225 | orchestrator | 2025-02-10 08:47:33.732353 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-02-10 08:47:33.732388 | orchestrator | ok: [testbed-manager] 2025-02-10 08:48:33.797378 | orchestrator | 2025-02-10 08:48:33.797569 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-02-10 08:48:33.797615 | orchestrator | Pausing for 60 seconds 2025-02-10 08:48:33.908152 | orchestrator | changed: [testbed-manager] 2025-02-10 08:48:33.908290 | orchestrator | 2025-02-10 08:48:33.908310 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-02-10 08:48:33.908349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-02-10 08:52:45.809864 | orchestrator | 2025-02-10 08:52:45.810162 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-02-10 08:52:45.810212 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-02-10 08:52:47.906263 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-02-10 08:52:47.906401 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-02-10 08:52:47.906419 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-02-10 08:52:47.906434 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-02-10 08:52:47.906447 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-02-10 08:52:47.906460 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-02-10 08:52:47.906473 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 
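
The restart handler above compares the major version of the running postgres container (16) with that of the postgres image referenced by the compose file (also 16); because they match, the pgautoupgrade path (stop netbox, upgrade the database, remove the netbox-pgautoupgrade container) is skipped. A rough bash version of that comparison, under the stated assumptions:

    # Sketch: compare the major version of the running database container with
    # the image referenced by the compose file. The container name
    # netbox-postgres-1 is an assumption; PG_MAJOR is exported by the official
    # postgres image and is what this sketch reads.
    running_major=$(docker exec netbox-postgres-1 printenv PG_MAJOR)
    image_major=$(docker run --rm index.docker.io/library/postgres:16.6-alpine printenv PG_MAJOR)

    if [ "$running_major" != "$image_major" ]; then
        echo "postgres major version changes from $running_major to $image_major - upgrade needed"
    else
        echo "postgres major version unchanged ($running_major) - upgrade path skipped"
    fi
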
2025-02-10 08:52:47.906547 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-02-10 08:52:47.906596 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-02-10 08:52:47.906609 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-02-10 08:52:47.906622 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-02-10 08:52:47.906635 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-02-10 08:52:47.906647 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-02-10 08:52:47.906659 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-02-10 08:52:47.906672 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-02-10 08:52:47.906684 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-02-10 08:52:47.906697 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-02-10 08:52:47.906710 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-02-10 08:52:47.906735 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-02-10 08:52:47.906748 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-02-10 08:52:47.906761 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-02-10 08:52:47.906774 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-02-10 08:52:47.906786 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 
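
The handler "Check that all containers are in a good state" retries up to 60 times until every container of the netbox stack is up again after the restart; the long run of retries above covers the time the containers need to become healthy. A quick manual version of the same check, assuming the compose project is named netbox:

    # Sketch: show every container of the netbox compose project with its status,
    # and separately anything still reporting unhealthy. The project label is set
    # automatically by docker compose v2.
    docker ps -a --filter label=com.docker.compose.project=netbox \
        --format 'table {{.Names}}\t{{.Status}}'
    docker ps --filter label=com.docker.compose.project=netbox \
        --filter health=unhealthy --format '{{.Names}}'
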
2025-02-10 08:52:47.906801 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:47.906816 | orchestrator | 2025-02-10 08:52:47.906833 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-02-10 08:52:47.906847 | orchestrator | 2025-02-10 08:52:47.906861 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:52:47.906893 | orchestrator | ok: [testbed-manager] 2025-02-10 08:52:48.003056 | orchestrator | 2025-02-10 08:52:48.003208 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-02-10 08:52:48.003263 | orchestrator | 2025-02-10 08:52:48.070201 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-02-10 08:52:48.070381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 08:52:49.589107 | orchestrator | 2025-02-10 08:52:49.589269 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-02-10 08:52:49.589311 | orchestrator | ok: [testbed-manager] 2025-02-10 08:52:49.661923 | orchestrator | 2025-02-10 08:52:49.662131 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-02-10 08:52:49.662172 | orchestrator | ok: [testbed-manager] 2025-02-10 08:52:49.756527 | orchestrator | 2025-02-10 08:52:49.756630 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-02-10 08:52:49.756653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-02-10 08:52:52.475899 | orchestrator | 2025-02-10 08:52:52.476079 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-02-10 08:52:52.476135 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-02-10 08:52:53.111808 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-02-10 08:52:53.111948 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-02-10 08:52:53.111969 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-02-10 08:52:53.111985 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-02-10 08:52:53.112032 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-02-10 08:52:53.112047 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-02-10 08:52:53.112062 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-02-10 08:52:53.112076 | orchestrator | 2025-02-10 08:52:53.112091 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-02-10 08:52:53.112126 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:53.190847 | orchestrator | 2025-02-10 08:52:53.190979 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-02-10 08:52:53.191015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-02-10 08:52:54.414814 | orchestrator | 2025-02-10 08:52:54.414994 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-02-10 08:52:54.415055 | orchestrator | 
changed: [testbed-manager] => (item=ara) 2025-02-10 08:52:55.030172 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-02-10 08:52:55.030283 | orchestrator | 2025-02-10 08:52:55.030292 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-02-10 08:52:55.030312 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:55.107830 | orchestrator | 2025-02-10 08:52:55.107975 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-02-10 08:52:55.108039 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:52:55.164972 | orchestrator | 2025-02-10 08:52:55.165133 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-02-10 08:52:55.165174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-02-10 08:52:56.574735 | orchestrator | 2025-02-10 08:52:56.574854 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-02-10 08:52:56.574879 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:52:57.217414 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:52:57.217581 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:57.217598 | orchestrator | 2025-02-10 08:52:57.217610 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-02-10 08:52:57.217638 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:57.326167 | orchestrator | 2025-02-10 08:52:57.326267 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-02-10 08:52:57.326300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-02-10 08:52:57.990766 | orchestrator | 2025-02-10 08:52:57.990901 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-02-10 08:52:57.990936 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:52:58.616539 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:58.616683 | orchestrator | 2025-02-10 08:52:58.616703 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-02-10 08:52:58.616740 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:58.730762 | orchestrator | 2025-02-10 08:52:58.730900 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-02-10 08:52:58.730937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-02-10 08:52:59.259292 | orchestrator | 2025-02-10 08:52:59.259438 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-02-10 08:52:59.259522 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:59.656289 | orchestrator | 2025-02-10 08:52:59.656431 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-02-10 08:52:59.656471 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:00.909100 | orchestrator | 2025-02-10 08:53:00.909250 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-02-10 08:53:00.909290 | 
orchestrator | changed: [testbed-manager] => (item=conductor) 2025-02-10 08:53:01.583109 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-02-10 08:53:01.583253 | orchestrator | 2025-02-10 08:53:01.583274 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-02-10 08:53:01.583337 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:01.928806 | orchestrator | 2025-02-10 08:53:01.928908 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-02-10 08:53:01.928928 | orchestrator | ok: [testbed-manager] 2025-02-10 08:53:01.983060 | orchestrator | 2025-02-10 08:53:01.983159 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-02-10 08:53:01.983179 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:53:02.610991 | orchestrator | 2025-02-10 08:53:02.611134 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-02-10 08:53:02.611190 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:02.751598 | orchestrator | 2025-02-10 08:53:02.751741 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-02-10 08:53:02.751780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-02-10 08:53:02.809187 | orchestrator | 2025-02-10 08:53:02.809323 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-02-10 08:53:02.809360 | orchestrator | ok: [testbed-manager] 2025-02-10 08:53:04.861071 | orchestrator | 2025-02-10 08:53:04.861245 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-02-10 08:53:04.861302 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-02-10 08:53:05.561338 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-02-10 08:53:05.561511 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-02-10 08:53:05.561535 | orchestrator | 2025-02-10 08:53:05.561551 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-02-10 08:53:05.561586 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:05.651564 | orchestrator | 2025-02-10 08:53:05.651700 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-02-10 08:53:05.651738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-02-10 08:53:05.710607 | orchestrator | 2025-02-10 08:53:05.710733 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-02-10 08:53:05.710769 | orchestrator | ok: [testbed-manager] 2025-02-10 08:53:06.425261 | orchestrator | 2025-02-10 08:53:06.425386 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-02-10 08:53:06.425420 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-02-10 08:53:06.518820 | orchestrator | 2025-02-10 08:53:06.518936 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-02-10 08:53:06.518966 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-02-10 08:53:07.246942 | orchestrator | 2025-02-10 08:53:07.247082 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-02-10 08:53:07.247119 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:07.873098 | orchestrator | 2025-02-10 08:53:07.873234 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-02-10 08:53:07.873272 | orchestrator | ok: [testbed-manager] 2025-02-10 08:53:07.920830 | orchestrator | 2025-02-10 08:53:07.920970 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-02-10 08:53:07.921009 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:53:07.978365 | orchestrator | 2025-02-10 08:53:07.978534 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-02-10 08:53:07.978571 | orchestrator | ok: [testbed-manager] 2025-02-10 08:53:08.835324 | orchestrator | 2025-02-10 08:53:08.835462 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-02-10 08:53:08.835548 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:37.099133 | orchestrator | 2025-02-10 08:53:37.099290 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-02-10 08:53:37.099332 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:37.768439 | orchestrator | 2025-02-10 08:53:37.768660 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-02-10 08:53:37.768750 | orchestrator | ok: [testbed-manager] 2025-02-10 08:53:41.633999 | orchestrator | 2025-02-10 08:53:41.634224 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-02-10 08:53:41.634264 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:41.692463 | orchestrator | 2025-02-10 08:53:41.692619 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-02-10 08:53:41.692655 | orchestrator | ok: [testbed-manager] 2025-02-10 08:53:41.759371 | orchestrator | 2025-02-10 08:53:41.759533 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-02-10 08:53:41.759545 | orchestrator | 2025-02-10 08:53:41.759553 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-02-10 08:53:41.759574 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:54:41.815175 | orchestrator | 2025-02-10 08:54:41.815315 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-02-10 08:54:41.815346 | orchestrator | Pausing for 60 seconds 2025-02-10 08:54:43.475979 | orchestrator | changed: [testbed-manager] 2025-02-10 08:54:43.476113 | orchestrator | 2025-02-10 08:54:43.476130 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-02-10 08:54:43.476161 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:04.581910 | orchestrator | 2025-02-10 08:55:04.582225 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-02-10 08:55:04.582279 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 
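
Earlier in this play the manager role also raises the two inotify sysctls (fs.inotify.max_user_watches and fs.inotify.max_user_instances), presumably so the containerized services on the manager do not hit the default inotify limits. The task output does not show the configured values, so the numbers in this sketch are placeholders:

    # Sketch: raise the inotify limits the way the two sysctl tasks above do.
    # The actual values are not visible in the log; 524288 and 1024 are placeholders.
    sudo sysctl -w fs.inotify.max_user_watches=524288
    sudo sysctl -w fs.inotify.max_user_instances=1024

    # Persist across reboots (the Ansible sysctl module writes an equivalent
    # entry to a sysctl configuration file; the file name here is arbitrary).
    printf 'fs.inotify.max_user_watches = 524288\nfs.inotify.max_user_instances = 1024\n' | \
        sudo tee /etc/sysctl.d/99-inotify.conf
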
2025-02-10 08:55:08.958934 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:08.959068 | orchestrator | 2025-02-10 08:55:08.959087 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-02-10 08:55:08.959118 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:09.058415 | orchestrator | 2025-02-10 08:55:09.058601 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-02-10 08:55:09.058643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-02-10 08:55:09.128363 | orchestrator | 2025-02-10 08:55:09.128539 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-02-10 08:55:09.128560 | orchestrator | 2025-02-10 08:55:09.128575 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-02-10 08:55:09.128608 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:55:09.213889 | orchestrator | 2025-02-10 08:55:09.214014 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:55:09.214103 | orchestrator | testbed-manager : ok=103 changed=55 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0 2025-02-10 08:55:09.214119 | orchestrator | 2025-02-10 08:55:09.214152 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-10 08:55:09.218779 | orchestrator | + deactivate 2025-02-10 08:55:09.218810 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-02-10 08:55:09.218826 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:55:09.218841 | orchestrator | + export PATH 2025-02-10 08:55:09.218856 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-02-10 08:55:09.218870 | orchestrator | + '[' -n '' ']' 2025-02-10 08:55:09.218885 | orchestrator | + hash -r 2025-02-10 08:55:09.218900 | orchestrator | + '[' -n '' ']' 2025-02-10 08:55:09.218915 | orchestrator | + unset VIRTUAL_ENV 2025-02-10 08:55:09.218929 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-02-10 08:55:09.218944 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-02-10 08:55:09.218959 | orchestrator | + unset -f deactivate 2025-02-10 08:55:09.218974 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-02-10 08:55:09.218996 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-10 08:55:09.219412 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-02-10 08:55:09.219438 | orchestrator | + local max_attempts=60 2025-02-10 08:55:09.219499 | orchestrator | + local name=ceph-ansible 2025-02-10 08:55:09.219516 | orchestrator | + local attempt_num=1 2025-02-10 08:55:09.219537 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-02-10 08:55:09.248692 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 08:55:09.271581 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-02-10 08:55:09.271697 | orchestrator | + local max_attempts=60 2025-02-10 08:55:09.271714 | orchestrator | + local name=kolla-ansible 2025-02-10 08:55:09.271729 | orchestrator | + local attempt_num=1 2025-02-10 08:55:09.271743 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-02-10 08:55:09.271776 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 08:55:09.272609 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-02-10 08:55:09.272648 | orchestrator | + local max_attempts=60 2025-02-10 08:55:09.272673 | orchestrator | + local name=osism-ansible 2025-02-10 08:55:09.272690 | orchestrator | + local attempt_num=1 2025-02-10 08:55:09.272710 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-02-10 08:55:09.301565 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 08:55:09.981332 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-10 08:55:09.981501 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-02-10 08:55:09.981543 | orchestrator | ++ semver 8.1.0 8.0.0 2025-02-10 08:55:10.034555 | orchestrator | + [[ 1 -ge 0 ]] 2025-02-10 08:55:10.034779 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-02-10 08:55:10.034800 | orchestrator | + local max_attempts=60 2025-02-10 08:55:10.034827 | orchestrator | + local name=netbox-netbox-1 2025-02-10 08:55:10.034842 | orchestrator | + local attempt_num=1 2025-02-10 08:55:10.034861 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-02-10 08:55:10.068282 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 08:55:10.074673 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-02-10 08:55:10.074817 | orchestrator | + set -e 2025-02-10 08:55:11.495634 | orchestrator | + osism netbox import 2025-02-10 08:55:11.495748 | orchestrator | 2025-02-10 08:55:11 | INFO  | Task c9810732-c8d3-4f60-8c6c-cc8f86059fdd is running. Wait. No more output. 2025-02-10 08:55:14.417699 | orchestrator | + osism netbox init 2025-02-10 08:55:15.673776 | orchestrator | 2025-02-10 08:55:15 | INFO  | Task 60ad3824-508e-495d-905e-ab54c000ba0a was prepared for execution. 2025-02-10 08:55:17.261028 | orchestrator | 2025-02-10 08:55:15 | INFO  | It takes a moment until task 60ad3824-508e-495d-905e-ab54c000ba0a has been started and output is visible here. 
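
The shell trace above calls wait_for_container_healthy for ceph-ansible, kolla-ansible, osism-ansible and netbox-netbox-1, and in each case docker inspect reports healthy on the first attempt. Reconstructed from the traced variables (max_attempts, name, attempt_num) and the docker inspect call, the helper is presumably close to the following sketch; only the successful path is visible in the log, so the sleep interval and the failure handling are assumptions:

    # Sketch of the traced helper: poll a container's health status until it is
    # healthy or the attempt budget is exhausted. Sleep interval and error exit
    # are assumptions; only the happy path appears in the trace above.
    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1

        until [[ $(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name") == "healthy" ]]; do
            if (( attempt_num == max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$(( attempt_num + 1 ))
            sleep 5
        done
    }

    wait_for_container_healthy 60 ceph-ansible
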
2025-02-10 08:55:17.261189 | orchestrator | 2025-02-10 08:55:17.261309 | orchestrator | PLAY [Wait for netbox service] ************************************************* 2025-02-10 08:55:17.261326 | orchestrator | 2025-02-10 08:55:17.261346 | orchestrator | TASK [Wait for netbox service] ************************************************* 2025-02-10 08:55:18.100958 | orchestrator | [WARNING]: Platform linux on host localhost is using the discovered Python 2025-02-10 08:55:18.102361 | orchestrator | interpreter at /usr/local/bin/python3.13, but future installation of another 2025-02-10 08:55:18.102405 | orchestrator | Python interpreter could change the meaning of that path. See 2025-02-10 08:55:18.102417 | orchestrator | https://docs.ansible.com/ansible- 2025-02-10 08:55:18.102434 | orchestrator | core/2.18/reference_appendices/interpreter_discovery.html for more information. 2025-02-10 08:55:18.107412 | orchestrator | ok: [localhost] 2025-02-10 08:55:18.110701 | orchestrator | 2025-02-10 08:55:19.768696 | orchestrator | PLAY [Manage sites and locations] ********************************************** 2025-02-10 08:55:19.768894 | orchestrator | 2025-02-10 08:55:19.768934 | orchestrator | TASK [Manage Discworld site] *************************************************** 2025-02-10 08:55:19.768966 | orchestrator | changed: [localhost] 2025-02-10 08:55:21.250334 | orchestrator | 2025-02-10 08:55:21.250632 | orchestrator | TASK [Manage Ankh-Morpork location] ******************************************** 2025-02-10 08:55:21.250689 | orchestrator | changed: [localhost] 2025-02-10 08:55:21.251341 | orchestrator | 2025-02-10 08:55:21.251380 | orchestrator | PLAY [Manage IP prefixes] ****************************************************** 2025-02-10 08:55:21.251678 | orchestrator | 2025-02-10 08:55:21.252280 | orchestrator | TASK [Manage 192.168.16.0/20] ************************************************** 2025-02-10 08:55:22.669258 | orchestrator | changed: [localhost] 2025-02-10 08:55:22.669792 | orchestrator | 2025-02-10 08:55:22.669820 | orchestrator | TASK [Manage 192.168.112.0/20] ************************************************* 2025-02-10 08:55:23.785591 | orchestrator | changed: [localhost] 2025-02-10 08:55:23.785807 | orchestrator | 2025-02-10 08:55:23.785837 | orchestrator | PLAY [Manage IP addresses] ***************************************************** 2025-02-10 08:55:23.786590 | orchestrator | 2025-02-10 08:55:23.786959 | orchestrator | TASK [Manage api.testbed.osism.xyz IP address] ********************************* 2025-02-10 08:55:25.086776 | orchestrator | changed: [localhost] 2025-02-10 08:55:26.206456 | orchestrator | 2025-02-10 08:55:26.206681 | orchestrator | TASK [Manage api-int.testbed.osism.xyz IP address] ***************************** 2025-02-10 08:55:26.206842 | orchestrator | changed: [localhost] 2025-02-10 08:55:26.207108 | orchestrator | 2025-02-10 08:55:26.207134 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:55:26.207150 | orchestrator | 2025-02-10 08:55:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:55:26.207167 | orchestrator | 2025-02-10 08:55:26 | INFO  | Please wait and do not abort execution. 
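
The osism netbox init run above populates NetBox with the testbed's sites, locations, prefixes and the api/api-int addresses (Discworld, Ankh-Morpork, 192.168.16.0/20, 192.168.112.0/20, api.testbed.osism.xyz, api-int.testbed.osism.xyz). For comparison, creating one of those prefixes by hand against the NetBox REST API would look like the sketch below; the play itself presumably uses the NetBox Ansible modules rather than curl, and the URL and token here are placeholders:

    # Sketch: create the 192.168.16.0/20 prefix via the NetBox REST API.
    # <netbox-url> and <netbox-api-token> are placeholders, not values from this job.
    curl -s -X POST https://<netbox-url>/api/ipam/prefixes/ \
        -H "Authorization: Token <netbox-api-token>" \
        -H "Content-Type: application/json" \
        -d '{"prefix": "192.168.16.0/20"}'
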
2025-02-10 08:55:26.207183 | orchestrator | localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 08:55:26.207205 | orchestrator | 2025-02-10 08:55:26.442559 | orchestrator | + osism netbox manage 1000 2025-02-10 08:55:27.755304 | orchestrator | 2025-02-10 08:55:27 | INFO  | Task 0b604b75-c39a-4a9f-8c3e-472dcaa31cca was prepared for execution. 2025-02-10 08:55:29.340034 | orchestrator | 2025-02-10 08:55:27 | INFO  | It takes a moment until task 0b604b75-c39a-4a9f-8c3e-472dcaa31cca has been started and output is visible here. 2025-02-10 08:55:29.340292 | orchestrator | 2025-02-10 08:55:29.341091 | orchestrator | PLAY [Manage rack 1000] ******************************************************** 2025-02-10 08:55:29.341121 | orchestrator | 2025-02-10 08:55:29.341144 | orchestrator | TASK [Manage rack 1000] ******************************************************** 2025-02-10 08:55:30.774525 | orchestrator | changed: [localhost] 2025-02-10 08:55:30.774786 | orchestrator | 2025-02-10 08:55:30.775163 | orchestrator | TASK [Manage testbed-switch-0] ************************************************* 2025-02-10 08:55:37.208220 | orchestrator | changed: [localhost] 2025-02-10 08:55:43.503046 | orchestrator | 2025-02-10 08:55:43.503209 | orchestrator | TASK [Manage testbed-switch-1] ************************************************* 2025-02-10 08:55:43.503267 | orchestrator | changed: [localhost] 2025-02-10 08:55:50.271282 | orchestrator | 2025-02-10 08:55:50.271449 | orchestrator | TASK [Manage testbed-switch-2] ************************************************* 2025-02-10 08:55:50.271524 | orchestrator | changed: [localhost] 2025-02-10 08:55:58.936530 | orchestrator | 2025-02-10 08:55:58.936710 | orchestrator | TASK [Manage testbed-manager] ************************************************** 2025-02-10 08:55:58.936763 | orchestrator | changed: [localhost] 2025-02-10 08:56:01.258585 | orchestrator | 2025-02-10 08:56:01.258816 | orchestrator | TASK [Manage testbed-node-0] *************************************************** 2025-02-10 08:56:01.258861 | orchestrator | changed: [localhost] 2025-02-10 08:56:03.603294 | orchestrator | 2025-02-10 08:56:03.603430 | orchestrator | TASK [Manage testbed-node-1] *************************************************** 2025-02-10 08:56:03.603543 | orchestrator | changed: [localhost] 2025-02-10 08:56:05.954246 | orchestrator | 2025-02-10 08:56:05.954377 | orchestrator | TASK [Manage testbed-node-2] *************************************************** 2025-02-10 08:56:05.954406 | orchestrator | changed: [localhost] 2025-02-10 08:56:05.955195 | orchestrator | 2025-02-10 08:56:05.955216 | orchestrator | TASK [Manage testbed-node-3] *************************************************** 2025-02-10 08:56:08.242720 | orchestrator | changed: [localhost] 2025-02-10 08:56:10.969909 | orchestrator | 2025-02-10 08:56:10.970129 | orchestrator | TASK [Manage testbed-node-4] *************************************************** 2025-02-10 08:56:10.970175 | orchestrator | changed: [localhost] 2025-02-10 08:56:10.970631 | orchestrator | 2025-02-10 08:56:10.971587 | orchestrator | TASK [Manage testbed-node-5] *************************************************** 2025-02-10 08:56:13.338348 | orchestrator | changed: [localhost] 2025-02-10 08:56:13.338609 | orchestrator | 2025-02-10 08:56:13.338644 | orchestrator | TASK [Manage testbed-node-6] *************************************************** 2025-02-10 08:56:15.596205 | orchestrator | changed: [localhost] 
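Up to this point the bootstrap script has driven NetBox entirely through the osism CLI: an import step, an init play for sites, locations, prefixes, and IP addresses, and a manage play that creates rack 1000 with its switches and nodes. Reduced to the commands visible in the trace, the sequence is roughly the following; the real /opt/configuration/scripts/bootstrap/000-netbox.sh may contain additional steps not shown here.

    # Sequence reconstructed from the trace above.
    set -e
    osism netbox import        # background task, no further output
    osism netbox init          # sites, locations, prefixes, IP addresses
    osism netbox manage 1000   # rack 1000 plus its switches and nodes

The osism netbox connect 1000 --state a and osism netbox disable calls that follow in the log continue the same per-rack pattern for the devices of rack 1000.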
2025-02-10 08:56:15.596503 | orchestrator | 2025-02-10 08:56:15.597240 | orchestrator | TASK [Manage testbed-node-7] *************************************************** 2025-02-10 08:56:18.014134 | orchestrator | changed: [localhost] 2025-02-10 08:56:18.014334 | orchestrator | 2025-02-10 08:56:18.014390 | orchestrator | TASK [Manage testbed-node-8] *************************************************** 2025-02-10 08:56:20.819011 | orchestrator | changed: [localhost] 2025-02-10 08:56:20.820279 | orchestrator | 2025-02-10 08:56:20.821048 | orchestrator | TASK [Manage testbed-node-9] *************************************************** 2025-02-10 08:56:23.151793 | orchestrator | changed: [localhost] 2025-02-10 08:56:23.152051 | orchestrator | 2025-02-10 08:56:23.152087 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:56:23.152968 | orchestrator | 2025-02-10 08:56:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:56:23.153287 | orchestrator | 2025-02-10 08:56:23 | INFO  | Please wait and do not abort execution. 2025-02-10 08:56:23.153334 | orchestrator | localhost : ok=15 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 08:56:23.154183 | orchestrator | 2025-02-10 08:56:23.479734 | orchestrator | + osism netbox connect 1000 --state a 2025-02-10 08:56:25.005613 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task 42150d78-101d-4160-a4ee-4429e82e916f for device testbed-node-7 is running in background 2025-02-10 08:56:25.015010 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task 1bef92da-ac3b-486b-935d-9b1bb544cedb for device testbed-node-8 is running in background 2025-02-10 08:56:25.019859 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task bd53212e-6cf9-4f40-aaa3-bca1e7a9cccd for device testbed-switch-1 is running in background 2025-02-10 08:56:25.024415 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task c901fff4-f32c-423b-8919-bf9da0ed68cc for device testbed-node-9 is running in background 2025-02-10 08:56:25.030779 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task b72c0481-151d-42c4-bcc6-3511a250c902 for device testbed-node-3 is running in background 2025-02-10 08:56:25.034704 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task 43a0b978-3ae5-4c8f-bceb-2704e4ec67f2 for device testbed-node-2 is running in background 2025-02-10 08:56:25.038106 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task 20150db3-1f82-4d3e-813c-ca240e337a2f for device testbed-node-5 is running in background 2025-02-10 08:56:25.042757 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task d9529ebd-9fa7-41e4-a930-8ac6129a8b64 for device testbed-node-4 is running in background 2025-02-10 08:56:25.045055 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task b5f199dd-9bb8-427c-8118-965cff27c66e for device testbed-manager is running in background 2025-02-10 08:56:25.045100 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task 3361a89f-c1dc-4522-84bf-03286da71fd5 for device testbed-switch-0 is running in background 2025-02-10 08:56:25.047741 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task 5ec16a06-44aa-4356-a472-4eb7fdcac945 for device testbed-switch-2 is running in background 2025-02-10 08:56:25.049905 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task b5c8f3d2-0142-491d-a0d9-3c63e5f8f68f for device testbed-node-6 is running in background 2025-02-10 08:56:25.052658 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task 5869e6bc-e9c7-4c46-a8f4-c33b4ff46825 for device testbed-node-0 is 
running in background 2025-02-10 08:56:25.054791 | orchestrator | 2025-02-10 08:56:25 | INFO  | Task 31292133-4dde-4450-831e-cfbf592a4e1a for device testbed-node-1 is running in background 2025-02-10 08:56:25.293187 | orchestrator | 2025-02-10 08:56:25 | INFO  | Tasks are running in background. No more output. Check Flower for logs. 2025-02-10 08:56:25.293341 | orchestrator | + osism netbox disable --no-wait testbed-switch-0 2025-02-10 08:56:27.026316 | orchestrator | + osism netbox disable --no-wait testbed-switch-1 2025-02-10 08:56:28.778934 | orchestrator | + osism netbox disable --no-wait testbed-switch-2 2025-02-10 08:56:30.418626 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-02-10 08:56:30.612038 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-02-10 08:56:30.617389 | orchestrator | ceph-ansible quay.io/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617433 | orchestrator | kolla-ansible quay.io/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617443 | orchestrator | manager-api-1 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-02-10 08:56:30.617454 | orchestrator | manager-ara-server-1 quay.io/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-02-10 08:56:30.617488 | orchestrator | manager-beat-1 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617497 | orchestrator | manager-conductor-1 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617506 | orchestrator | manager-flower-1 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617515 | orchestrator | manager-inventory_reconciler-1 quay.io/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617523 | orchestrator | manager-listener-1 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617532 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-02-10 08:56:30.617541 | orchestrator | manager-netbox-1 quay.io/osism/osism-netbox:0.20241219.2 "/usr/bin/tini -- os…" netbox 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617550 | orchestrator | manager-openstack-1 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617559 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-02-10 08:56:30.617568 | orchestrator | manager-watchdog-1 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617576 | orchestrator | osism-ansible quay.io/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617585 | orchestrator | osism-kubernetes quay.io/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up 2 minutes (healthy) 2025-02-10 
08:56:30.617593 | orchestrator | osismclient quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:56:30.617612 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-02-10 08:56:30.752242 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-02-10 08:56:30.756851 | orchestrator | netbox-netbox-1 quay.io/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-02-10 08:56:30.756897 | orchestrator | netbox-netbox-worker-1 quay.io/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 4 minutes (healthy) 2025-02-10 08:56:30.756943 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-02-10 08:56:30.756977 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-02-10 08:56:30.757002 | orchestrator | ++ semver 8.1.0 7.0.0 2025-02-10 08:56:30.799005 | orchestrator | + [[ 1 -ge 0 ]] 2025-02-10 08:56:32.131704 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-02-10 08:56:32.131863 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-02-10 08:56:32.131951 | orchestrator | 2025-02-10 08:56:32 | INFO  | Task 16e844c7-2b55-4a66-95d4-5d7e04086ff1 (resolvconf) was prepared for execution. 2025-02-10 08:56:34.804262 | orchestrator | 2025-02-10 08:56:32 | INFO  | It takes a moment until task 16e844c7-2b55-4a66-95d4-5d7e04086ff1 (resolvconf) has been started and output is visible here. 2025-02-10 08:56:34.804420 | orchestrator | 2025-02-10 08:56:34.805733 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-02-10 08:56:34.805777 | orchestrator | 2025-02-10 08:56:34.805793 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:56:34.805816 | orchestrator | Monday 10 February 2025 08:56:34 +0000 (0:00:00.077) 0:00:00.077 ******* 2025-02-10 08:56:38.296614 | orchestrator | ok: [testbed-manager] 2025-02-10 08:56:38.297892 | orchestrator | 2025-02-10 08:56:38.297955 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-02-10 08:56:38.297984 | orchestrator | Monday 10 February 2025 08:56:38 +0000 (0:00:03.495) 0:00:03.572 ******* 2025-02-10 08:56:38.352243 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:56:38.354898 | orchestrator | 2025-02-10 08:56:38.355288 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-02-10 08:56:38.355621 | orchestrator | Monday 10 February 2025 08:56:38 +0000 (0:00:00.054) 0:00:03.627 ******* 2025-02-10 08:56:38.446571 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-02-10 08:56:38.446751 | orchestrator | 2025-02-10 08:56:38.446770 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-02-10 08:56:38.446788 | orchestrator | Monday 10 February 2025 08:56:38 +0000 (0:00:00.094) 0:00:03.722 ******* 2025-02-10 08:56:38.507320 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 08:56:38.507819 | orchestrator | 2025-02-10 08:56:38.507858 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-02-10 08:56:38.507883 | orchestrator | Monday 10 February 2025 08:56:38 +0000 (0:00:00.059) 0:00:03.781 ******* 2025-02-10 08:56:39.479217 | orchestrator | ok: [testbed-manager] 2025-02-10 08:56:39.524658 | orchestrator | 2025-02-10 08:56:39.524855 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-02-10 08:56:39.524878 | orchestrator | Monday 10 February 2025 08:56:39 +0000 (0:00:00.971) 0:00:04.753 ******* 2025-02-10 08:56:39.524900 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:56:39.992098 | orchestrator | 2025-02-10 08:56:39.992229 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-02-10 08:56:39.992247 | orchestrator | Monday 10 February 2025 08:56:39 +0000 (0:00:00.046) 0:00:04.800 ******* 2025-02-10 08:56:39.992278 | orchestrator | ok: [testbed-manager] 2025-02-10 08:56:40.062784 | orchestrator | 2025-02-10 08:56:40.062912 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-02-10 08:56:40.062932 | orchestrator | Monday 10 February 2025 08:56:39 +0000 (0:00:00.464) 0:00:05.264 ******* 2025-02-10 08:56:40.062969 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:56:40.064899 | orchestrator | 2025-02-10 08:56:40.067899 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-02-10 08:56:40.067953 | orchestrator | Monday 10 February 2025 08:56:40 +0000 (0:00:00.071) 0:00:05.336 ******* 2025-02-10 08:56:40.583955 | orchestrator | changed: [testbed-manager] 2025-02-10 08:56:40.584111 | orchestrator | 2025-02-10 08:56:40.584140 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-02-10 08:56:40.584173 | orchestrator | Monday 10 February 2025 08:56:40 +0000 (0:00:00.519) 0:00:05.855 ******* 2025-02-10 08:56:41.553893 | orchestrator | changed: [testbed-manager] 2025-02-10 08:56:41.554165 | orchestrator | 2025-02-10 08:56:41.554505 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-02-10 08:56:41.554681 | orchestrator | Monday 10 February 2025 08:56:41 +0000 (0:00:00.972) 0:00:06.828 ******* 2025-02-10 08:56:42.429597 | orchestrator | ok: [testbed-manager] 2025-02-10 08:56:42.499971 | orchestrator | 2025-02-10 08:56:42.500107 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-02-10 08:56:42.500127 | orchestrator | Monday 10 February 2025 08:56:42 +0000 (0:00:00.875) 0:00:07.703 ******* 2025-02-10 08:56:42.500162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-02-10 08:56:42.500576 | orchestrator | 2025-02-10 08:56:42.500615 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-02-10 08:56:42.504861 | orchestrator | Monday 10 February 2025 08:56:42 +0000 (0:00:00.072) 0:00:07.776 ******* 2025-02-10 08:56:43.699584 | orchestrator | changed: [testbed-manager] 2025-02-10 08:56:43.700115 | orchestrator | 2025-02-10 08:56:43.700160 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:56:43.700186 | orchestrator | 2025-02-10 08:56:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:56:43.700212 | orchestrator | 2025-02-10 08:56:43 | INFO  | Please wait and do not abort execution. 2025-02-10 08:56:43.700245 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 08:56:43.700420 | orchestrator | 2025-02-10 08:56:43.700990 | orchestrator | Monday 10 February 2025 08:56:43 +0000 (0:00:01.197) 0:00:08.973 ******* 2025-02-10 08:56:43.701490 | orchestrator | =============================================================================== 2025-02-10 08:56:43.701878 | orchestrator | Gathering Facts --------------------------------------------------------- 3.50s 2025-02-10 08:56:43.702350 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.20s 2025-02-10 08:56:43.703041 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.97s 2025-02-10 08:56:43.703372 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.97s 2025-02-10 08:56:43.703766 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.88s 2025-02-10 08:56:43.704178 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-02-10 08:56:43.704617 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2025-02-10 08:56:43.704985 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-02-10 08:56:43.705385 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-02-10 08:56:43.709072 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-02-10 08:56:43.713122 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2025-02-10 08:56:43.713318 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-02-10 08:56:44.167167 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-02-10 08:56:44.167295 | orchestrator | + osism apply sshconfig 2025-02-10 08:56:45.855505 | orchestrator | 2025-02-10 08:56:45 | INFO  | Task fb8c16ad-cb98-4618-afb0-7cc71ff48e2a (sshconfig) was prepared for execution. 2025-02-10 08:56:48.991397 | orchestrator | 2025-02-10 08:56:45 | INFO  | It takes a moment until task fb8c16ad-cb98-4618-afb0-7cc71ff48e2a (sshconfig) has been started and output is visible here. 
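The resolvconf play above amounts to switching the manager to systemd-resolved: packages that manage /etc/resolv.conf are removed, the stub resolver file is linked into place, configuration files are copied, and the service is started and then restarted. A rough shell equivalent of the tasks reported as "changed" is sketched below; it is illustrative only, since the role performs these steps through Ansible modules and templates whose contents are not visible in the log.

    # Illustrative equivalent of the resolvconf tasks reported above.
    ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
    # (resolved configuration files are templated by the role; contents not shown in the log)
    systemctl enable --now systemd-resolved
    systemctl restart systemd-resolved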
2025-02-10 08:56:48.991629 | orchestrator | 2025-02-10 08:56:48.991985 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-02-10 08:56:48.992016 | orchestrator | 2025-02-10 08:56:48.992034 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-02-10 08:56:48.992057 | orchestrator | Monday 10 February 2025 08:56:48 +0000 (0:00:00.094) 0:00:00.094 ******* 2025-02-10 08:56:49.459821 | orchestrator | ok: [testbed-manager] 2025-02-10 08:56:49.461688 | orchestrator | 2025-02-10 08:56:49.870341 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-02-10 08:56:49.870589 | orchestrator | Monday 10 February 2025 08:56:49 +0000 (0:00:00.471) 0:00:00.566 ******* 2025-02-10 08:56:49.870647 | orchestrator | changed: [testbed-manager] 2025-02-10 08:56:49.873956 | orchestrator | 2025-02-10 08:56:49.874195 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-02-10 08:56:49.874833 | orchestrator | Monday 10 February 2025 08:56:49 +0000 (0:00:00.410) 0:00:00.977 ******* 2025-02-10 08:56:54.482242 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-02-10 08:56:54.482931 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-02-10 08:56:54.483252 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-02-10 08:56:54.483296 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-02-10 08:56:54.483323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-02-10 08:56:54.483510 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-02-10 08:56:54.483542 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-02-10 08:56:54.487114 | orchestrator | 2025-02-10 08:56:54.487182 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-02-10 08:56:54.487209 | orchestrator | Monday 10 February 2025 08:56:54 +0000 (0:00:04.609) 0:00:05.587 ******* 2025-02-10 08:56:54.551436 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:56:54.552922 | orchestrator | 2025-02-10 08:56:54.555958 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-02-10 08:56:55.043885 | orchestrator | Monday 10 February 2025 08:56:54 +0000 (0:00:00.069) 0:00:05.656 ******* 2025-02-10 08:56:55.044033 | orchestrator | changed: [testbed-manager] 2025-02-10 08:56:55.044711 | orchestrator | 2025-02-10 08:56:55.044926 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:56:55.045155 | orchestrator | 2025-02-10 08:56:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:56:55.045536 | orchestrator | 2025-02-10 08:56:55 | INFO  | Please wait and do not abort execution. 
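The sshconfig play uses a fragment-and-assemble pattern: one configuration fragment per host under ~/.ssh/config.d, then a single assembled ~/.ssh/config. The sketch below only illustrates that pattern; the host options are assumptions (the operator user dragon and the testbed-node-0 to 192.168.16.10 mapping appear elsewhere in this log, but the role's actual template is not shown).

    # Sketch of the fragment-and-assemble pattern; options are placeholders.
    mkdir -p ~/.ssh/config.d
    printf 'Host testbed-node-0\n    HostName 192.168.16.10\n    User dragon\n' \
      > ~/.ssh/config.d/testbed-node-0
    cat ~/.ssh/config.d/* > ~/.ssh/config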
2025-02-10 08:56:55.047803 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 08:56:55.048368 | orchestrator | 2025-02-10 08:56:55.048635 | orchestrator | Monday 10 February 2025 08:56:55 +0000 (0:00:00.495) 0:00:06.151 ******* 2025-02-10 08:56:55.049016 | orchestrator | =============================================================================== 2025-02-10 08:56:55.050644 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.61s 2025-02-10 08:56:55.051706 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.50s 2025-02-10 08:56:55.051888 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.47s 2025-02-10 08:56:55.052205 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.41s 2025-02-10 08:56:55.052534 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-02-10 08:56:55.379186 | orchestrator | + osism apply known-hosts 2025-02-10 08:56:56.661665 | orchestrator | 2025-02-10 08:56:56 | INFO  | Task ea6a1260-05b1-4e0c-b530-59977426eea3 (known-hosts) was prepared for execution. 2025-02-10 08:56:59.053006 | orchestrator | 2025-02-10 08:56:56 | INFO  | It takes a moment until task ea6a1260-05b1-4e0c-b530-59977426eea3 (known-hosts) has been started and output is visible here. 2025-02-10 08:56:59.053200 | orchestrator | 2025-02-10 08:57:04.367369 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-02-10 08:57:04.367540 | orchestrator | 2025-02-10 08:57:04.367563 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-02-10 08:57:04.367581 | orchestrator | Monday 10 February 2025 08:56:59 +0000 (0:00:00.080) 0:00:00.080 ******* 2025-02-10 08:57:04.367616 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-02-10 08:57:04.367926 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-02-10 08:57:04.367965 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-02-10 08:57:04.368144 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-02-10 08:57:04.369948 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-02-10 08:57:04.370502 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-02-10 08:57:04.370993 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-02-10 08:57:04.371605 | orchestrator | 2025-02-10 08:57:04.372164 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-02-10 08:57:04.372519 | orchestrator | Monday 10 February 2025 08:57:04 +0000 (0:00:05.312) 0:00:05.393 ******* 2025-02-10 08:57:04.532997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-02-10 08:57:04.533214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-02-10 08:57:04.534385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-02-10 
08:57:04.535299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-02-10 08:57:04.535872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-02-10 08:57:04.536221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-02-10 08:57:04.536991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-02-10 08:57:04.537230 | orchestrator | 2025-02-10 08:57:04.537787 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:04.538178 | orchestrator | Monday 10 February 2025 08:57:04 +0000 (0:00:00.168) 0:00:05.561 ******* 2025-02-10 08:57:05.770685 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgnP6P4WKzpPeKcK3ryOhTwO992TI8EhYaeX+i0nfV/td16ZukRMRYHDTZwGan2wXKdQmqieYLo6TWX4cxibCuhkoBjljVaahxyVqz7/fdZHGzlE8OUfuT04UL4SoPw4XODj9erIrmegB4fQpoqHhVP33Qf9C+UPUVHmrMKIGPfYwkg5l0tASbUyAoeHXzpHwUk2YUQzYjp3BVBQvXzjjMgwkDA+JPvtnU1DHcQYeweXhNvfgL53Suo6ouoxamVAREYyChagTqZKtPSthrmaXuwPs6H1arrdG/Icn/CqZTEPHHhE/Xd3nJPON2xaP+50bgHAWKtlO+FTelSof3ZXvjchb0+8H8zhmZZzvaZ30fUSTQIecU83d3vUWhibKqOdi5FADPwCIWEVGNnC/ZO1iVHjuLdPvoPSPKTOCg2Rv6LhiaIY87CwnvfH3uxpvixrJ3HBS2RrnNI8rmd3BavOdmuJboswsuiq7ekSGwYzLk2CrPah/HCjDcsOjfYCln7eE=) 2025-02-10 08:57:05.771536 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJKnJo946tukBegTSPCN/hEdgbcaRRadqMg5dYbBEw3YbnnZnGOc7UAcFlfPPmeQlOnsE0v7WfTfpdzrUG7olOA=) 2025-02-10 08:57:05.772805 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeb7Dv2rg/sUWIc/Nl6DR/+9rRv4hz/PlACdP/q+qML) 2025-02-10 08:57:05.773400 | orchestrator | 2025-02-10 08:57:05.773723 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:05.774798 | orchestrator | Monday 10 February 2025 08:57:05 +0000 (0:00:01.237) 0:00:06.798 ******* 2025-02-10 08:57:06.813772 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDN/NXSd9gyhylgXmbbCI9uN8zQuqhu4fMKLsE/V8RlqBJd0hZgGHRFqLY9xTI+G3dIStFz7wnoHGUNlWpTLfHOWgr7hlJ/THHIx2e3s8y2z7zpi9gUwRlWxPQ5zhw3iqYWBOjAOOntI+5hsQW++R1DBSsxzM+adnMV60kk7f3zrxAHbU9a1Cyv1ibhsfGzed1IlBmWDufu0fIAIms2FASbYES6NKipwDoAVCm6K7AFDkUZV71vGXwZQntM4fUNx+MsnO2Y6H+5DXFgsW+xJ9IqmQMHFdxHrMyiRA/+RtXHHDwo+TmymARfSnvpdTLWKtb2KNZ27BbSnMnlAc7C9WhnJyYFSnr09KQQPjg+FSKsKJ2ibNG8WbhuL/atbq8oVZ9bSN6XMAbWcvkkj2GjISZFNZirRovhBFFD5IuKTlO95o+cYioTeIVdq5qEPM3Cr9glqz8IwoEQ/arMvPllpXMyTxxFkTuIVM66EP8Wliu/BbiEQDlpkbA0Fret3IP9tAc=) 2025-02-10 08:57:06.814156 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOo9HR8B5cqxDn5f/3kSBJLxUuAyP5NVVedVXUHI3vFiKJVpBwbdrzFhJKsj7KJc+eARPGp2KOCJANKycN7IVIM=) 2025-02-10 08:57:06.815311 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTtC82XsQo/kRFQ0q5Uk81NIBS1/X1VHxa0uy6nwtt7) 2025-02-10 08:57:06.815698 | orchestrator | 2025-02-10 08:57:06.816330 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:06.816833 | orchestrator | Monday 10 February 2025 08:57:06 +0000 (0:00:01.041) 0:00:07.840 ******* 2025-02-10 08:57:07.896625 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsS6mkPDvnyMfTBBuSXiO0rjJTZpZfyE9BTZYG7gnFw4X08kkGzMgfTfjoQypYWs58B/DQw1hmuZH9tadWiEVjdJp+2Qy5g5OwiHcMaG835SS5hJ7zbLq23Opf600LMaPfwuCljfKqr2sXBKsvazQ/+CrcajGaw8fluROewQyXPgjbCilaeEevj6MqK0LXNkInkpYI6HmqxDjuaX5M6aKp6ML9NUGIJRDZqWEJ80NT2VCINmEwNCTBDiY/rEEftBSn9aQYJzz4zIVggD4C7N0yku40DIkmvoUBrDnddOt9cp6DIB80vhr3jB9qI/UaBrYDcUxxuujHlctItlxbXbx7nvfU+nKxmSfZ8aRa69cZtHUgoja1qKPpu+4LP5dw6YFFlioqMP262fvL+8DFK8uYri8bCn3qsFLREsDuqR3Z19mMX+IA0gvw69jKfCC8Rf7Vn4X1xkixm7AzZHB14Wg5BmpwwO26vFHZd0Gx2eHdj3EAzA4HicNkFmpVx7yK49s=) 2025-02-10 08:57:07.896937 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP1zQMezzaHwWtCdVaXRBFss2lsRFqRnz87m/QEwpTg0fxMoXW5tq09EypLKTubB4pkOb3dVdVFM+ykDqCWS2bA=) 2025-02-10 08:57:07.898200 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEfJcvEILDmlPPTW8fQKNVu+0zpobAc3Cz4Oczxd58Bq) 2025-02-10 08:57:07.898238 | orchestrator | 2025-02-10 08:57:08.997776 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:08.997876 | orchestrator | Monday 10 February 2025 08:57:07 +0000 (0:00:01.084) 0:00:08.924 ******* 2025-02-10 08:57:08.997900 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBCnoSiaB6IVDH797V/oEzqDFli8836me4scg7yJvHU48NCg0HqY4iisrM8QvKqQExiUOUYUCT5gD3SRtbfi7PHnULcG1yeWnCDbDdmHwRytLcG7/Y4K388YnznSApg54H+XCp7yls910xTdhe36I+wy1RgZCjamUQL83gWWzgH8RFSSWf13KcUbf3JozveLVwhnekq5TzMvygeYi8UcnZd3bD/uqL3U1d8acsZzMyTX9FFILLLN2+6E8Lz5DHwSQ5UY0xQfmAErKzWgkkof0C3dNiV6O4g2et8hXRuwayxsw32U5qYaoTatCemhKoyIbnAkJ3PSsJC0uLtIut37ybwMErBiLZQZ5DImgAYzoYuoRDi5wEytyvGowzbzlq71KfzD+Jdn8Bd46xUZawjuhxo1NS0Z0/YTKg+7+CZPyryUNwdol3sku8Kxu3Vk7RUdKGSd9yfC7PqXX089PMOUVa/WAAQdPMXJyc2aMiTCzi56AWMiI4ovM+PyNGX+u6wo8=) 2025-02-10 08:57:08.998224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBN6VMteEKcrYNALnJrXM3XZ7wLbtq9f82O+9XQPtU6xvM8exY/aR754UJnafCXc0mECdqSUGAUGGM9TSddcIlk=) 2025-02-10 08:57:08.998258 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH9ODm6rK2WAbUGRYdmBmcJcx9XVV6Iu6dS7+C2iGoWk) 2025-02-10 08:57:08.998270 | orchestrator | 2025-02-10 08:57:08.998974 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:08.999317 | orchestrator | Monday 10 February 2025 08:57:08 +0000 (0:00:01.099) 0:00:10.024 ******* 2025-02-10 08:57:10.054238 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDSbgc0o8ppQHHsXeyaf1AByJOIya0zcBqEq7AflXdWqHj9gR6kcjpn9JxzNyehDaNpzvwPd5455rEblS1lJYQgdJbEZYrL4YM8QxjYbq/O6jxF3o6CqEdyCplwjrsUUtf/kvvtHokvPH453gPnoSQgO4h7uaP7P5X6uNZkGopkPh9ceqBXnx2yx/H0DcuNEMeZz8JOc+AmMfCmD2d9rb2W+NEPCdbFIA5nbukOdCbS99PzKP8YrsKaJuM4I9UlW1SfJwrBl++hICfcUUMselAWlYWG0gV0gOjcN7MY9NlNsPeJ/eiCmR0GT7lSJugejnAKw5AyFNndmIiprTFAjF84aounYRy/zfGKQsaQxiUMHTLRYK1M0lm+49YE43CuM+mF5A7zSOitHOi0J1sCrI36s89XGs9FfGkwy35P/e1SqNSAkCs2cJ07DDM6JB3RlOKoBAfumH+It1zrNfw24W4SRohdTN5ljywVUCQzsDdE42GaABHFjZQ8g0vH7HDLSf8=) 2025-02-10 08:57:10.055200 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHGXwknMDmqEr2rr1Jz4YJVxsrqypRlb1yEtgIK7MAQPi0KrrnDCvdNhsCe2LCMrapGQU/F9pEMtwYlKjNSBC1I=) 2025-02-10 08:57:10.055977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHNkauqE1eHGZ/NkIQvsDM1ZDBQB8BLoWqTV8wAi8iEn) 2025-02-10 08:57:10.056613 | orchestrator | 2025-02-10 08:57:10.057289 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:10.057626 | orchestrator | Monday 10 February 2025 08:57:10 +0000 (0:00:01.057) 0:00:11.081 ******* 2025-02-10 08:57:11.109087 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvQkbbszGnAc9w61BbpHWXhwGTjs7mkFOzmizxIi3CH0j/tFnIWwjPaFndIqf5k+YDgfW6VAQxLnfmfKMkv39TgdKVtTE15T5JQsEsBlgZKgM3JWyA9uuf8d45WfI1T+MDL5SrNpG5RggMzMqstqYXYl6OydvBPuRBD/5KyrUJWaPLUen9XSMJGEYcABbUte6PShc+1jyRpOzogh20dXMAeLhFL6wi1bQjeLuuFfPRx7msGWft8KOCKJtlFKLSEcpo2QsqtTxPM4th7SfILIfQqx1SuRGsE2ifZtg95Mo3EcQH7BgP/J6gJp8UhLneBK1UehfE8GyrmRSI1+xaLNN5DzCN1h6I+pO5k8MG6o6R8sNjfm02VzATjMBhfLO4myZNIfg/d7QDPvzM4KsdEELd+HZiqkO9D05k4Q6HzMME2H2bmye8u0vKuvqy7OxmDOpUjgqOQwNAX+UAJURWArjIBoKLaTeB5I5tKI5R50CfNXddEDp2iQohKNd8bmc0EQs=) 2025-02-10 08:57:11.110946 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBARE5D79KYztG6bFq1CfooZBRjnPMYy6dGEGMmwssVdtAtP0dprz6Qz4teSa0Q+tcs17nHTLAB/GoEsyRm0e2Yk=) 2025-02-10 08:57:11.110999 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXyhTvqjPubm5JZqEAEWsmUMYl64hwEslm9igT1pI5C) 2025-02-10 08:57:11.111027 | orchestrator | 2025-02-10 08:57:11.112281 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:11.112999 | orchestrator | Monday 10 February 2025 08:57:11 +0000 (0:00:01.053) 0:00:12.135 ******* 2025-02-10 08:57:12.278734 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvDyNPxaHdHnmalppgXOnu8SD6I4jowesyx2WTz5dr997SpeViK/TlZkAgHHnT/Pk0ufUn8lf8WIiK4p0OVBRk=) 2025-02-10 08:57:12.278977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDeN5SdAnimoInoYiVM4Ua4tHH2tBc70l+Eq0RyLiX1b) 2025-02-10 08:57:12.279656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCqb46bCbVvYSeCSt2VCxe86L4MQhXgCRaStjanGAY4ZGO1CumAb6FZWsRZJfTL03blr647CJOuSq2HkR5iBnoaQf9w0P+CyBp76Hq2MLyd196q9joNipZsl6byXk/YQ0uRdqHyQfGg0sYMYg2cKPp3LGNJE4svNv7xcgQb876bTedY+MarwYuvexehoMPD2ek8sGSYx0Gbz+91byLz8cEagwzby2sMH/XlYwxuQECtdr5+VjKiu/GBVCZkKRiyH40D3YzlKFg9in+/OwTfZtep+mWZfOSkZ/U56vHAmWJg0818Rb57Q+sMjHKDvyWkPf0p51XZntzWtRdu4wmwn7Pxu87V9tLG+6q9cEgaxx/ysuNyTVVDimW1kh0zjl07kgiMBCl4tIRGymOabWNSfRovZHFf99et884GQuikHXkey4E8aLQiop5pqX4rSUpXmux8CAK1di9QvRjKew+ChzAS9XLzGaHs+fq33Q27BYKSuo0d3a1hkqoabAbLwOoXn4M=) 2025-02-10 08:57:12.280676 | orchestrator | 2025-02-10 08:57:12.281293 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-02-10 08:57:12.281778 | orchestrator | Monday 10 February 2025 08:57:12 +0000 (0:00:01.170) 0:00:13.305 ******* 2025-02-10 08:57:17.490636 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-02-10 08:57:17.491103 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-02-10 08:57:17.491147 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-02-10 08:57:17.491173 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-02-10 08:57:17.491884 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-02-10 08:57:17.493884 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-02-10 08:57:17.494309 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-02-10 08:57:17.494937 | orchestrator | 2025-02-10 08:57:17.496132 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-02-10 08:57:17.658923 | orchestrator | Monday 10 February 2025 08:57:17 +0000 (0:00:05.210) 0:00:18.516 ******* 2025-02-10 08:57:17.659087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-02-10 08:57:17.660714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-02-10 08:57:17.660752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-02-10 08:57:17.661207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-02-10 08:57:17.662719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-02-10 08:57:17.663815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-02-10 08:57:17.664787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-02-10 08:57:17.665486 | orchestrator | 2025-02-10 08:57:17.665796 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 
08:57:17.666226 | orchestrator | Monday 10 February 2025 08:57:17 +0000 (0:00:00.170) 0:00:18.686 ******* 2025-02-10 08:57:18.754660 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgnP6P4WKzpPeKcK3ryOhTwO992TI8EhYaeX+i0nfV/td16ZukRMRYHDTZwGan2wXKdQmqieYLo6TWX4cxibCuhkoBjljVaahxyVqz7/fdZHGzlE8OUfuT04UL4SoPw4XODj9erIrmegB4fQpoqHhVP33Qf9C+UPUVHmrMKIGPfYwkg5l0tASbUyAoeHXzpHwUk2YUQzYjp3BVBQvXzjjMgwkDA+JPvtnU1DHcQYeweXhNvfgL53Suo6ouoxamVAREYyChagTqZKtPSthrmaXuwPs6H1arrdG/Icn/CqZTEPHHhE/Xd3nJPON2xaP+50bgHAWKtlO+FTelSof3ZXvjchb0+8H8zhmZZzvaZ30fUSTQIecU83d3vUWhibKqOdi5FADPwCIWEVGNnC/ZO1iVHjuLdPvoPSPKTOCg2Rv6LhiaIY87CwnvfH3uxpvixrJ3HBS2RrnNI8rmd3BavOdmuJboswsuiq7ekSGwYzLk2CrPah/HCjDcsOjfYCln7eE=) 2025-02-10 08:57:18.757342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJKnJo946tukBegTSPCN/hEdgbcaRRadqMg5dYbBEw3YbnnZnGOc7UAcFlfPPmeQlOnsE0v7WfTfpdzrUG7olOA=) 2025-02-10 08:57:18.757500 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeb7Dv2rg/sUWIc/Nl6DR/+9rRv4hz/PlACdP/q+qML) 2025-02-10 08:57:18.760778 | orchestrator | 2025-02-10 08:57:18.761110 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:18.761154 | orchestrator | Monday 10 February 2025 08:57:18 +0000 (0:00:01.097) 0:00:19.783 ******* 2025-02-10 08:57:19.847183 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDN/NXSd9gyhylgXmbbCI9uN8zQuqhu4fMKLsE/V8RlqBJd0hZgGHRFqLY9xTI+G3dIStFz7wnoHGUNlWpTLfHOWgr7hlJ/THHIx2e3s8y2z7zpi9gUwRlWxPQ5zhw3iqYWBOjAOOntI+5hsQW++R1DBSsxzM+adnMV60kk7f3zrxAHbU9a1Cyv1ibhsfGzed1IlBmWDufu0fIAIms2FASbYES6NKipwDoAVCm6K7AFDkUZV71vGXwZQntM4fUNx+MsnO2Y6H+5DXFgsW+xJ9IqmQMHFdxHrMyiRA/+RtXHHDwo+TmymARfSnvpdTLWKtb2KNZ27BbSnMnlAc7C9WhnJyYFSnr09KQQPjg+FSKsKJ2ibNG8WbhuL/atbq8oVZ9bSN6XMAbWcvkkj2GjISZFNZirRovhBFFD5IuKTlO95o+cYioTeIVdq5qEPM3Cr9glqz8IwoEQ/arMvPllpXMyTxxFkTuIVM66EP8Wliu/BbiEQDlpkbA0Fret3IP9tAc=) 2025-02-10 08:57:19.847341 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOo9HR8B5cqxDn5f/3kSBJLxUuAyP5NVVedVXUHI3vFiKJVpBwbdrzFhJKsj7KJc+eARPGp2KOCJANKycN7IVIM=) 2025-02-10 08:57:19.848921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTtC82XsQo/kRFQ0q5Uk81NIBS1/X1VHxa0uy6nwtt7) 2025-02-10 08:57:19.850191 | orchestrator | 2025-02-10 08:57:19.850602 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:19.851278 | orchestrator | Monday 10 February 2025 08:57:19 +0000 (0:00:01.088) 0:00:20.872 ******* 2025-02-10 08:57:20.942374 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsS6mkPDvnyMfTBBuSXiO0rjJTZpZfyE9BTZYG7gnFw4X08kkGzMgfTfjoQypYWs58B/DQw1hmuZH9tadWiEVjdJp+2Qy5g5OwiHcMaG835SS5hJ7zbLq23Opf600LMaPfwuCljfKqr2sXBKsvazQ/+CrcajGaw8fluROewQyXPgjbCilaeEevj6MqK0LXNkInkpYI6HmqxDjuaX5M6aKp6ML9NUGIJRDZqWEJ80NT2VCINmEwNCTBDiY/rEEftBSn9aQYJzz4zIVggD4C7N0yku40DIkmvoUBrDnddOt9cp6DIB80vhr3jB9qI/UaBrYDcUxxuujHlctItlxbXbx7nvfU+nKxmSfZ8aRa69cZtHUgoja1qKPpu+4LP5dw6YFFlioqMP262fvL+8DFK8uYri8bCn3qsFLREsDuqR3Z19mMX+IA0gvw69jKfCC8Rf7Vn4X1xkixm7AzZHB14Wg5BmpwwO26vFHZd0Gx2eHdj3EAzA4HicNkFmpVx7yK49s=) 2025-02-10 08:57:20.942703 | orchestrator | changed: [testbed-manager] 
=> (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP1zQMezzaHwWtCdVaXRBFss2lsRFqRnz87m/QEwpTg0fxMoXW5tq09EypLKTubB4pkOb3dVdVFM+ykDqCWS2bA=) 2025-02-10 08:57:20.943274 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEfJcvEILDmlPPTW8fQKNVu+0zpobAc3Cz4Oczxd58Bq) 2025-02-10 08:57:20.944322 | orchestrator | 2025-02-10 08:57:20.945027 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:20.946005 | orchestrator | Monday 10 February 2025 08:57:20 +0000 (0:00:01.096) 0:00:21.968 ******* 2025-02-10 08:57:22.040717 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBCnoSiaB6IVDH797V/oEzqDFli8836me4scg7yJvHU48NCg0HqY4iisrM8QvKqQExiUOUYUCT5gD3SRtbfi7PHnULcG1yeWnCDbDdmHwRytLcG7/Y4K388YnznSApg54H+XCp7yls910xTdhe36I+wy1RgZCjamUQL83gWWzgH8RFSSWf13KcUbf3JozveLVwhnekq5TzMvygeYi8UcnZd3bD/uqL3U1d8acsZzMyTX9FFILLLN2+6E8Lz5DHwSQ5UY0xQfmAErKzWgkkof0C3dNiV6O4g2et8hXRuwayxsw32U5qYaoTatCemhKoyIbnAkJ3PSsJC0uLtIut37ybwMErBiLZQZ5DImgAYzoYuoRDi5wEytyvGowzbzlq71KfzD+Jdn8Bd46xUZawjuhxo1NS0Z0/YTKg+7+CZPyryUNwdol3sku8Kxu3Vk7RUdKGSd9yfC7PqXX089PMOUVa/WAAQdPMXJyc2aMiTCzi56AWMiI4ovM+PyNGX+u6wo8=) 2025-02-10 08:57:22.041221 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBN6VMteEKcrYNALnJrXM3XZ7wLbtq9f82O+9XQPtU6xvM8exY/aR754UJnafCXc0mECdqSUGAUGGM9TSddcIlk=) 2025-02-10 08:57:22.041264 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH9ODm6rK2WAbUGRYdmBmcJcx9XVV6Iu6dS7+C2iGoWk) 2025-02-10 08:57:22.041321 | orchestrator | 2025-02-10 08:57:22.041848 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:22.042087 | orchestrator | Monday 10 February 2025 08:57:22 +0000 (0:00:01.097) 0:00:23.066 ******* 2025-02-10 08:57:23.136159 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSbgc0o8ppQHHsXeyaf1AByJOIya0zcBqEq7AflXdWqHj9gR6kcjpn9JxzNyehDaNpzvwPd5455rEblS1lJYQgdJbEZYrL4YM8QxjYbq/O6jxF3o6CqEdyCplwjrsUUtf/kvvtHokvPH453gPnoSQgO4h7uaP7P5X6uNZkGopkPh9ceqBXnx2yx/H0DcuNEMeZz8JOc+AmMfCmD2d9rb2W+NEPCdbFIA5nbukOdCbS99PzKP8YrsKaJuM4I9UlW1SfJwrBl++hICfcUUMselAWlYWG0gV0gOjcN7MY9NlNsPeJ/eiCmR0GT7lSJugejnAKw5AyFNndmIiprTFAjF84aounYRy/zfGKQsaQxiUMHTLRYK1M0lm+49YE43CuM+mF5A7zSOitHOi0J1sCrI36s89XGs9FfGkwy35P/e1SqNSAkCs2cJ07DDM6JB3RlOKoBAfumH+It1zrNfw24W4SRohdTN5ljywVUCQzsDdE42GaABHFjZQ8g0vH7HDLSf8=) 2025-02-10 08:57:23.136531 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHGXwknMDmqEr2rr1Jz4YJVxsrqypRlb1yEtgIK7MAQPi0KrrnDCvdNhsCe2LCMrapGQU/F9pEMtwYlKjNSBC1I=) 2025-02-10 08:57:23.136571 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHNkauqE1eHGZ/NkIQvsDM1ZDBQB8BLoWqTV8wAi8iEn) 2025-02-10 08:57:23.136600 | orchestrator | 2025-02-10 08:57:23.137534 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:23.137572 | orchestrator | Monday 10 February 2025 08:57:23 +0000 (0:00:01.095) 0:00:24.161 ******* 2025-02-10 08:57:24.239641 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvQkbbszGnAc9w61BbpHWXhwGTjs7mkFOzmizxIi3CH0j/tFnIWwjPaFndIqf5k+YDgfW6VAQxLnfmfKMkv39TgdKVtTE15T5JQsEsBlgZKgM3JWyA9uuf8d45WfI1T+MDL5SrNpG5RggMzMqstqYXYl6OydvBPuRBD/5KyrUJWaPLUen9XSMJGEYcABbUte6PShc+1jyRpOzogh20dXMAeLhFL6wi1bQjeLuuFfPRx7msGWft8KOCKJtlFKLSEcpo2QsqtTxPM4th7SfILIfQqx1SuRGsE2ifZtg95Mo3EcQH7BgP/J6gJp8UhLneBK1UehfE8GyrmRSI1+xaLNN5DzCN1h6I+pO5k8MG6o6R8sNjfm02VzATjMBhfLO4myZNIfg/d7QDPvzM4KsdEELd+HZiqkO9D05k4Q6HzMME2H2bmye8u0vKuvqy7OxmDOpUjgqOQwNAX+UAJURWArjIBoKLaTeB5I5tKI5R50CfNXddEDp2iQohKNd8bmc0EQs=) 2025-02-10 08:57:24.239914 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBARE5D79KYztG6bFq1CfooZBRjnPMYy6dGEGMmwssVdtAtP0dprz6Qz4teSa0Q+tcs17nHTLAB/GoEsyRm0e2Yk=) 2025-02-10 08:57:24.240605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXyhTvqjPubm5JZqEAEWsmUMYl64hwEslm9igT1pI5C) 2025-02-10 08:57:24.241920 | orchestrator | 2025-02-10 08:57:24.242149 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:57:24.243035 | orchestrator | Monday 10 February 2025 08:57:24 +0000 (0:00:01.105) 0:00:25.267 ******* 2025-02-10 08:57:25.324036 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqb46bCbVvYSeCSt2VCxe86L4MQhXgCRaStjanGAY4ZGO1CumAb6FZWsRZJfTL03blr647CJOuSq2HkR5iBnoaQf9w0P+CyBp76Hq2MLyd196q9joNipZsl6byXk/YQ0uRdqHyQfGg0sYMYg2cKPp3LGNJE4svNv7xcgQb876bTedY+MarwYuvexehoMPD2ek8sGSYx0Gbz+91byLz8cEagwzby2sMH/XlYwxuQECtdr5+VjKiu/GBVCZkKRiyH40D3YzlKFg9in+/OwTfZtep+mWZfOSkZ/U56vHAmWJg0818Rb57Q+sMjHKDvyWkPf0p51XZntzWtRdu4wmwn7Pxu87V9tLG+6q9cEgaxx/ysuNyTVVDimW1kh0zjl07kgiMBCl4tIRGymOabWNSfRovZHFf99et884GQuikHXkey4E8aLQiop5pqX4rSUpXmux8CAK1di9QvRjKew+ChzAS9XLzGaHs+fq33Q27BYKSuo0d3a1hkqoabAbLwOoXn4M=) 2025-02-10 08:57:25.324984 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvDyNPxaHdHnmalppgXOnu8SD6I4jowesyx2WTz5dr997SpeViK/TlZkAgHHnT/Pk0ufUn8lf8WIiK4p0OVBRk=) 2025-02-10 08:57:25.325347 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDeN5SdAnimoInoYiVM4Ua4tHH2tBc70l+Eq0RyLiX1b) 2025-02-10 08:57:25.326216 | orchestrator | 2025-02-10 08:57:25.326673 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-02-10 08:57:25.327058 | orchestrator | Monday 10 February 2025 08:57:25 +0000 (0:00:01.083) 0:00:26.350 ******* 2025-02-10 08:57:25.494127 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-02-10 08:57:25.495206 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-02-10 08:57:25.495261 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-02-10 08:57:25.495699 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-02-10 08:57:25.496575 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-02-10 08:57:25.496920 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-02-10 08:57:25.497721 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-02-10 08:57:25.498158 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:25.498821 | orchestrator | 2025-02-10 08:57:25.499086 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-02-10 08:57:25.499556 | orchestrator | Monday 10 February 2025 08:57:25 +0000 (0:00:00.170) 0:00:26.521 ******* 2025-02-10 08:57:25.552290 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:25.552943 | orchestrator | 2025-02-10 08:57:25.553699 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-02-10 08:57:25.554014 | orchestrator | Monday 10 February 2025 08:57:25 +0000 (0:00:00.059) 0:00:26.581 ******* 2025-02-10 08:57:25.618701 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:25.619345 | orchestrator | 2025-02-10 08:57:25.619726 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-02-10 08:57:25.620404 | orchestrator | Monday 10 February 2025 08:57:25 +0000 (0:00:00.065) 0:00:26.647 ******* 2025-02-10 08:57:26.101603 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:26.102531 | orchestrator | 2025-02-10 08:57:26.103190 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:57:26.103646 | orchestrator | 2025-02-10 08:57:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:57:26.104225 | orchestrator | 2025-02-10 08:57:26 | INFO  | Please wait and do not abort execution. 2025-02-10 08:57:26.104588 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 08:57:26.105801 | orchestrator | 2025-02-10 08:57:26.106287 | orchestrator | Monday 10 February 2025 08:57:26 +0000 (0:00:00.482) 0:00:27.129 ******* 2025-02-10 08:57:26.107248 | orchestrator | =============================================================================== 2025-02-10 08:57:26.108162 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.31s 2025-02-10 08:57:26.108951 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2025-02-10 08:57:26.109112 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-02-10 08:57:26.109762 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-02-10 08:57:26.110074 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-02-10 08:57:26.110516 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-10 08:57:26.111491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-10 08:57:26.111802 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-10 08:57:26.112097 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-10 08:57:26.112304 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-10 08:57:26.112579 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-02-10 08:57:26.112959 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-02-10 08:57:26.113932 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-02-10 08:57:26.114164 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-02-10 08:57:26.114299 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-02-10 08:57:26.115279 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-02-10 08:57:26.115665 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-02-10 08:57:26.116039 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-02-10 08:57:26.116746 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-02-10 08:57:26.117004 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-02-10 08:57:26.523190 | orchestrator | ++ semver 8.1.0 7.0.0 2025-02-10 08:57:26.568518 | orchestrator | + [[ 1 -ge 0 ]] 2025-02-10 08:57:27.956291 | orchestrator | + osism apply nexus 2025-02-10 08:57:27.956524 | orchestrator | 2025-02-10 08:57:27 | INFO  | Task a09e18e3-f53b-43f2-9806-d0e710365816 (nexus) was prepared for execution. 2025-02-10 08:57:30.968760 | orchestrator | 2025-02-10 08:57:27 | INFO  | It takes a moment until task a09e18e3-f53b-43f2-9806-d0e710365816 (nexus) has been started and output is visible here. 2025-02-10 08:57:30.968970 | orchestrator | 2025-02-10 08:57:30.969369 | orchestrator | PLAY [Apply role nexus] ******************************************************** 2025-02-10 08:57:30.969412 | orchestrator | 2025-02-10 08:57:30.969476 | orchestrator | TASK [osism.services.nexus : Include config tasks] ***************************** 2025-02-10 08:57:30.970246 | orchestrator | Monday 10 February 2025 08:57:30 +0000 (0:00:00.105) 0:00:00.105 ******* 2025-02-10 08:57:31.055581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/config.yml for testbed-manager 2025-02-10 08:57:31.055725 | orchestrator | 2025-02-10 08:57:31.056814 | orchestrator | TASK [osism.services.nexus : Create required directories] ********************** 2025-02-10 08:57:31.057125 | orchestrator | Monday 10 February 2025 08:57:31 +0000 (0:00:00.091) 0:00:00.197 ******* 2025-02-10 08:57:31.932920 | orchestrator | changed: [testbed-manager] => (item=/opt/nexus) 2025-02-10 08:57:31.933079 | orchestrator | changed: [testbed-manager] => (item=/opt/nexus/configuration) 2025-02-10 08:57:31.933100 | orchestrator | 2025-02-10 08:57:31.934432 | orchestrator | TASK [osism.services.nexus : Set UID for nexus_configuration_directory] ******** 2025-02-10 08:57:31.935600 | orchestrator | Monday 10 February 2025 08:57:31 +0000 (0:00:00.876) 0:00:01.073 ******* 2025-02-10 08:57:32.304915 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:32.305121 | orchestrator | 2025-02-10 08:57:32.306445 | orchestrator | TASK [osism.services.nexus : Copy configuration files] ************************* 2025-02-10 08:57:32.307240 | orchestrator | Monday 10 February 2025 08:57:32 +0000 (0:00:00.370) 0:00:01.444 ******* 2025-02-10 08:57:34.240254 | orchestrator | changed: [testbed-manager] => (item=nexus.properties) 2025-02-10 08:57:34.334994 | orchestrator | changed: [testbed-manager] => (item=nexus.env) 2025-02-10 08:57:34.335132 | orchestrator | 2025-02-10 08:57:34.335153 | orchestrator | TASK [osism.services.nexus : Include service tasks] **************************** 2025-02-10 08:57:34.335169 | orchestrator | Monday 10 February 2025 08:57:34 +0000 (0:00:01.936) 0:00:03.381 ******* 2025-02-10 08:57:34.335203 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/service.yml for testbed-manager 2025-02-10 08:57:34.335524 | orchestrator | 2025-02-10 08:57:34.336169 | orchestrator | TASK [osism.services.nexus : Copy nexus systemd unit file] ********************* 2025-02-10 08:57:34.336974 | orchestrator | Monday 10 February 2025 08:57:34 +0000 (0:00:00.095) 0:00:03.476 ******* 2025-02-10 08:57:35.210757 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:35.211073 | orchestrator | 2025-02-10 08:57:35.213182 | orchestrator | TASK [osism.services.nexus : Create traefik external network] ****************** 2025-02-10 08:57:36.003995 | orchestrator | Monday 10 February 2025 08:57:35 +0000 (0:00:00.873) 0:00:04.350 ******* 2025-02-10 08:57:36.004136 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:36.007380 | orchestrator | 2025-02-10 08:57:36.008926 | orchestrator | TASK [osism.services.nexus : Copy docker-compose.yml file] ********************* 2025-02-10 08:57:36.008989 | orchestrator | Monday 10 February 2025 08:57:35 +0000 (0:00:00.794) 0:00:05.144 ******* 2025-02-10 08:57:37.005003 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:37.007000 | orchestrator | 2025-02-10 08:57:37.007350 | orchestrator | TASK [osism.services.nexus : Stop and disable old service docker-compose@nexus] *** 2025-02-10 08:57:37.007406 | orchestrator | Monday 10 February 2025 08:57:36 +0000 (0:00:00.998) 0:00:06.143 ******* 2025-02-10 08:57:37.974000 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:37.974308 | orchestrator | 2025-02-10 08:57:37.974772 | orchestrator | TASK [osism.services.nexus : Manage nexus service] ***************************** 2025-02-10 08:57:37.975635 | orchestrator | Monday 10 February 2025 08:57:37 +0000 (0:00:00.969) 0:00:07.113 ******* 2025-02-10 08:57:39.402203 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:39.403269 | orchestrator | 2025-02-10 08:57:39.403547 | orchestrator | TASK [osism.services.nexus : Register that nexus service was started] ********** 2025-02-10 08:57:39.403578 | orchestrator | Monday 10 February 2025 08:57:39 +0000 (0:00:01.428) 0:00:08.541 ******* 2025-02-10 08:57:39.503836 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:39.504954 | orchestrator | 2025-02-10 08:57:39.506236 | orchestrator | TASK [osism.services.nexus : Flush handlers] *********************************** 2025-02-10 08:57:39.507231 | orchestrator | Monday 10 February 2025 08:57:39 +0000 (0:00:00.060) 0:00:08.601 ******* 2025-02-10 08:57:39.507924 | orchestrator | 2025-02-10 08:57:39.508578 | orchestrator | RUNNING HANDLER [osism.services.nexus : Restart nexus service] ***************** 2025-02-10 08:57:39.509648 | orchestrator | Monday 10 February 2025 08:57:39 +0000 (0:00:00.042) 0:00:08.644 ******* 2025-02-10 08:57:39.577738 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:39.578549 | orchestrator | 2025-02-10 08:57:39.578595 | orchestrator | RUNNING HANDLER [osism.services.nexus : Wait for nexus service to start] ******* 2025-02-10 08:57:39.578879 | orchestrator | Monday 10 February 2025 08:57:39 +0000 (0:00:00.073) 0:00:08.717 ******* 2025-02-10 08:58:39.652069 | orchestrator | Pausing for 60 seconds 2025-02-10 08:58:40.187418 | orchestrator | changed: [testbed-manager] 2025-02-10 08:58:40.187637 | orchestrator | 2025-02-10 08:58:40.187663 | orchestrator | RUNNING HANDLER [osism.services.nexus : Ensure that all containers are up] ***** 2025-02-10 08:58:40.187680 | orchestrator | 
Monday 10 February 2025 08:58:39 +0000 (0:01:00.067) 0:01:08.784 ******* 2025-02-10 08:58:40.187714 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:01.027385 | orchestrator | 2025-02-10 08:59:01.027660 | orchestrator | RUNNING HANDLER [osism.services.nexus : Wait for an healthy nexus service] ***** 2025-02-10 08:59:01.027700 | orchestrator | Monday 10 February 2025 08:58:40 +0000 (0:00:00.537) 0:01:09.322 ******* 2025-02-10 08:59:01.027753 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy nexus service (50 retries left). 2025-02-10 08:59:01.131644 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:01.131812 | orchestrator | 2025-02-10 08:59:01.131832 | orchestrator | TASK [osism.services.nexus : Include initialize tasks] ************************* 2025-02-10 08:59:01.131845 | orchestrator | Monday 10 February 2025 08:59:01 +0000 (0:00:20.841) 0:01:30.163 ******* 2025-02-10 08:59:01.131874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/initialize.yml for testbed-manager 2025-02-10 08:59:01.131936 | orchestrator | 2025-02-10 08:59:01.133195 | orchestrator | TASK [osism.services.nexus : Get setup admin password] ************************* 2025-02-10 08:59:01.133515 | orchestrator | Monday 10 February 2025 08:59:01 +0000 (0:00:00.108) 0:01:30.272 ******* 2025-02-10 08:59:02.133828 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:02.135819 | orchestrator | 2025-02-10 08:59:02.135850 | orchestrator | TASK [osism.services.nexus : Set setup admin password] ************************* 2025-02-10 08:59:02.196956 | orchestrator | Monday 10 February 2025 08:59:02 +0000 (0:00:01.001) 0:01:31.274 ******* 2025-02-10 08:59:02.197067 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:02.197343 | orchestrator | 2025-02-10 08:59:02.198910 | orchestrator | TASK [osism.services.nexus : Provision scripts included in the container image] *** 2025-02-10 08:59:02.200000 | orchestrator | Monday 10 February 2025 08:59:02 +0000 (0:00:00.065) 0:01:31.339 ******* 2025-02-10 08:59:05.639538 | orchestrator | changed: [testbed-manager] => (item=anonymous.json) 2025-02-10 08:59:05.639791 | orchestrator | changed: [testbed-manager] => (item=cleanup.json) 2025-02-10 08:59:05.639820 | orchestrator | 2025-02-10 08:59:05.639836 | orchestrator | TASK [osism.services.nexus : Provision scripts included in this ansible role] *** 2025-02-10 08:59:05.639858 | orchestrator | Monday 10 February 2025 08:59:05 +0000 (0:00:03.440) 0:01:34.779 ******* 2025-02-10 08:59:05.752155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=create_repos_from_list) 2025-02-10 08:59:05.753072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=setup_http_proxy) 2025-02-10 08:59:05.754012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=setup_realms) 2025-02-10 08:59:05.754110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=update_admin_password) 2025-02-10 08:59:05.754884 | orchestrator | 2025-02-10 08:59:05.755606 | orchestrator | TASK [osism.services.nexus : Set nexus url] 
************************************ 2025-02-10 08:59:05.756099 | orchestrator | Monday 10 February 2025 08:59:05 +0000 (0:00:00.114) 0:01:34.894 ******* 2025-02-10 08:59:05.827294 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:05.827537 | orchestrator | 2025-02-10 08:59:05.828347 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:05.828984 | orchestrator | Monday 10 February 2025 08:59:05 +0000 (0:00:00.075) 0:01:34.969 ******* 2025-02-10 08:59:05.950299 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:59:05.950493 | orchestrator | 2025-02-10 08:59:05.950517 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:59:05.950539 | orchestrator | Monday 10 February 2025 08:59:05 +0000 (0:00:00.122) 0:01:35.092 ******* 2025-02-10 08:59:06.694486 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:06.694773 | orchestrator | 2025-02-10 08:59:06.694794 | orchestrator | TASK [osism.services.nexus : Deleting script create_repos_from_list] *********** 2025-02-10 08:59:06.695574 | orchestrator | Monday 10 February 2025 08:59:06 +0000 (0:00:00.740) 0:01:35.833 ******* 2025-02-10 08:59:07.349551 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:07.349925 | orchestrator | 2025-02-10 08:59:07.350146 | orchestrator | TASK [osism.services.nexus : Declaring script create_repos_from_list] ********** 2025-02-10 08:59:07.351428 | orchestrator | Monday 10 February 2025 08:59:07 +0000 (0:00:00.658) 0:01:36.491 ******* 2025-02-10 08:59:07.976035 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:07.977106 | orchestrator | 2025-02-10 08:59:07.977151 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:08.050697 | orchestrator | Monday 10 February 2025 08:59:07 +0000 (0:00:00.624) 0:01:37.115 ******* 2025-02-10 08:59:08.050839 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:08.052216 | orchestrator | 2025-02-10 08:59:08.052247 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:08.052787 | orchestrator | Monday 10 February 2025 08:59:08 +0000 (0:00:00.075) 0:01:37.191 ******* 2025-02-10 08:59:08.109037 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:59:08.111494 | orchestrator | 2025-02-10 08:59:08.765503 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:59:08.765676 | orchestrator | Monday 10 February 2025 08:59:08 +0000 (0:00:00.057) 0:01:37.248 ******* 2025-02-10 08:59:08.765716 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:08.765799 | orchestrator | 2025-02-10 08:59:08.766545 | orchestrator | TASK [osism.services.nexus : Deleting script setup_http_proxy] ***************** 2025-02-10 08:59:08.767309 | orchestrator | Monday 10 February 2025 08:59:08 +0000 (0:00:00.656) 0:01:37.905 ******* 2025-02-10 08:59:09.422661 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:09.422897 | orchestrator | 2025-02-10 08:59:09.423648 | orchestrator | TASK [osism.services.nexus : Declaring script setup_http_proxy] **************** 2025-02-10 08:59:09.423856 | orchestrator | Monday 10 February 2025 08:59:09 +0000 (0:00:00.659) 0:01:38.564 ******* 2025-02-10 08:59:10.138247 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:10.138641 | orchestrator | 2025-02-10 08:59:10.138683 | orchestrator | TASK [osism.services.nexus : Set nexus url] 
************************************ 2025-02-10 08:59:10.139680 | orchestrator | Monday 10 February 2025 08:59:10 +0000 (0:00:00.713) 0:01:39.277 ******* 2025-02-10 08:59:10.199600 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:10.200863 | orchestrator | 2025-02-10 08:59:10.202158 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:10.202975 | orchestrator | Monday 10 February 2025 08:59:10 +0000 (0:00:00.063) 0:01:39.341 ******* 2025-02-10 08:59:10.263392 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:59:10.264352 | orchestrator | 2025-02-10 08:59:10.266002 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:59:10.266705 | orchestrator | Monday 10 February 2025 08:59:10 +0000 (0:00:00.062) 0:01:39.403 ******* 2025-02-10 08:59:10.902630 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:10.903228 | orchestrator | 2025-02-10 08:59:10.904125 | orchestrator | TASK [osism.services.nexus : Deleting script setup_realms] ********************* 2025-02-10 08:59:10.904815 | orchestrator | Monday 10 February 2025 08:59:10 +0000 (0:00:00.637) 0:01:40.041 ******* 2025-02-10 08:59:11.608223 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:11.608818 | orchestrator | 2025-02-10 08:59:11.609427 | orchestrator | TASK [osism.services.nexus : Declaring script setup_realms] ******************** 2025-02-10 08:59:11.612169 | orchestrator | Monday 10 February 2025 08:59:11 +0000 (0:00:00.707) 0:01:40.748 ******* 2025-02-10 08:59:12.282713 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:12.283174 | orchestrator | 2025-02-10 08:59:12.284101 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:12.285322 | orchestrator | Monday 10 February 2025 08:59:12 +0000 (0:00:00.674) 0:01:41.423 ******* 2025-02-10 08:59:12.349672 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:12.350185 | orchestrator | 2025-02-10 08:59:12.351648 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:12.352593 | orchestrator | Monday 10 February 2025 08:59:12 +0000 (0:00:00.068) 0:01:41.491 ******* 2025-02-10 08:59:12.422366 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:59:12.422650 | orchestrator | 2025-02-10 08:59:12.423410 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:59:12.424247 | orchestrator | Monday 10 February 2025 08:59:12 +0000 (0:00:00.072) 0:01:41.563 ******* 2025-02-10 08:59:13.081985 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:13.082320 | orchestrator | 2025-02-10 08:59:13.083277 | orchestrator | TASK [osism.services.nexus : Deleting script update_admin_password] ************ 2025-02-10 08:59:13.083592 | orchestrator | Monday 10 February 2025 08:59:13 +0000 (0:00:00.657) 0:01:42.221 ******* 2025-02-10 08:59:13.768687 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:13.769797 | orchestrator | 2025-02-10 08:59:13.769855 | orchestrator | TASK [osism.services.nexus : Declaring script update_admin_password] *********** 2025-02-10 08:59:13.770300 | orchestrator | Monday 10 February 2025 08:59:13 +0000 (0:00:00.688) 0:01:42.909 ******* 2025-02-10 08:59:14.500526 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:14.501814 | orchestrator | 2025-02-10 08:59:14.639101 | orchestrator | TASK [osism.services.nexus : Set admin password] 
******************************* 2025-02-10 08:59:14.639246 | orchestrator | Monday 10 February 2025 08:59:14 +0000 (0:00:00.731) 0:01:43.641 ******* 2025-02-10 08:59:14.639298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-10 08:59:14.639858 | orchestrator | 2025-02-10 08:59:14.642572 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:14.642960 | orchestrator | Monday 10 February 2025 08:59:14 +0000 (0:00:00.136) 0:01:43.777 ******* 2025-02-10 08:59:14.715603 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:14.717241 | orchestrator | 2025-02-10 08:59:14.717428 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:14.717486 | orchestrator | Monday 10 February 2025 08:59:14 +0000 (0:00:00.078) 0:01:43.856 ******* 2025-02-10 08:59:14.786086 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:59:14.787407 | orchestrator | 2025-02-10 08:59:15.415855 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:59:15.415991 | orchestrator | Monday 10 February 2025 08:59:14 +0000 (0:00:00.071) 0:01:43.928 ******* 2025-02-10 08:59:15.416026 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:15.416833 | orchestrator | 2025-02-10 08:59:15.416925 | orchestrator | TASK [osism.services.nexus : Calling script update_admin_password] ************* 2025-02-10 08:59:15.416948 | orchestrator | Monday 10 February 2025 08:59:15 +0000 (0:00:00.627) 0:01:44.555 ******* 2025-02-10 08:59:17.088972 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:17.149814 | orchestrator | 2025-02-10 08:59:17.149958 | orchestrator | TASK [osism.services.nexus : Set new admin password] *************************** 2025-02-10 08:59:17.149979 | orchestrator | Monday 10 February 2025 08:59:17 +0000 (0:00:01.671) 0:01:46.226 ******* 2025-02-10 08:59:17.150072 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:19.095043 | orchestrator | 2025-02-10 08:59:19.095187 | orchestrator | TASK [osism.services.nexus : Allow anonymous access] *************************** 2025-02-10 08:59:19.095209 | orchestrator | Monday 10 February 2025 08:59:17 +0000 (0:00:00.065) 0:01:46.292 ******* 2025-02-10 08:59:19.095249 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:21.184304 | orchestrator | 2025-02-10 08:59:21.184551 | orchestrator | TASK [osism.services.nexus : Cleanup default repositories] ********************* 2025-02-10 08:59:21.184577 | orchestrator | Monday 10 February 2025 08:59:19 +0000 (0:00:01.940) 0:01:48.232 ******* 2025-02-10 08:59:21.184612 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:21.184698 | orchestrator | 2025-02-10 08:59:21.185038 | orchestrator | TASK [osism.services.nexus : Setup http proxy] ********************************* 2025-02-10 08:59:21.185886 | orchestrator | Monday 10 February 2025 08:59:21 +0000 (0:00:02.090) 0:01:50.323 ******* 2025-02-10 08:59:21.304695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-10 08:59:21.304959 | orchestrator | 2025-02-10 08:59:21.305741 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:21.306417 | orchestrator | Monday 10 February 2025 08:59:21 +0000 (0:00:00.123) 0:01:50.446 ******* 
2025-02-10 08:59:21.372303 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:21.373103 | orchestrator | 2025-02-10 08:59:21.373498 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:21.379848 | orchestrator | Monday 10 February 2025 08:59:21 +0000 (0:00:00.067) 0:01:50.514 ******* 2025-02-10 08:59:21.426967 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:59:21.427853 | orchestrator | 2025-02-10 08:59:21.429679 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:59:21.430173 | orchestrator | Monday 10 February 2025 08:59:21 +0000 (0:00:00.054) 0:01:50.568 ******* 2025-02-10 08:59:22.144855 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:22.145423 | orchestrator | 2025-02-10 08:59:22.146642 | orchestrator | TASK [osism.services.nexus : Calling script setup_http_proxy] ****************** 2025-02-10 08:59:22.147284 | orchestrator | Monday 10 February 2025 08:59:22 +0000 (0:00:00.711) 0:01:51.280 ******* 2025-02-10 08:59:23.201661 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:23.204468 | orchestrator | 2025-02-10 08:59:23.204985 | orchestrator | TASK [osism.services.nexus : Setup realms] ************************************* 2025-02-10 08:59:23.206201 | orchestrator | Monday 10 February 2025 08:59:23 +0000 (0:00:01.060) 0:01:52.340 ******* 2025-02-10 08:59:23.308133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-10 08:59:23.309074 | orchestrator | 2025-02-10 08:59:23.310277 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:23.310723 | orchestrator | Monday 10 February 2025 08:59:23 +0000 (0:00:00.108) 0:01:52.449 ******* 2025-02-10 08:59:23.481105 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:23.484577 | orchestrator | 2025-02-10 08:59:23.484625 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:23.484642 | orchestrator | Monday 10 February 2025 08:59:23 +0000 (0:00:00.172) 0:01:52.621 ******* 2025-02-10 08:59:23.552683 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:59:23.553036 | orchestrator | 2025-02-10 08:59:23.556434 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:59:23.557085 | orchestrator | Monday 10 February 2025 08:59:23 +0000 (0:00:00.071) 0:01:52.693 ******* 2025-02-10 08:59:24.206554 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:24.207237 | orchestrator | 2025-02-10 08:59:24.207301 | orchestrator | TASK [osism.services.nexus : Calling script setup_realms] ********************** 2025-02-10 08:59:24.207903 | orchestrator | Monday 10 February 2025 08:59:24 +0000 (0:00:00.654) 0:01:53.347 ******* 2025-02-10 08:59:25.254513 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:25.256421 | orchestrator | 2025-02-10 08:59:25.256565 | orchestrator | TASK [osism.services.nexus : Apply defaults to docker proxy repos] ************* 2025-02-10 08:59:25.338476 | orchestrator | Monday 10 February 2025 08:59:25 +0000 (0:00:01.038) 0:01:54.386 ******* 2025-02-10 08:59:25.338627 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:25.340336 | orchestrator | 2025-02-10 08:59:25.340382 | orchestrator | TASK [osism.services.nexus : Add docker repositories to global repos list] ***** 2025-02-10 08:59:25.340839 | 
orchestrator | Monday 10 February 2025 08:59:25 +0000 (0:00:00.091) 0:01:54.477 ******* 2025-02-10 08:59:25.419836 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:25.420938 | orchestrator | 2025-02-10 08:59:25.422234 | orchestrator | TASK [osism.services.nexus : Apply defaults to apt proxy repos] **************** 2025-02-10 08:59:25.423588 | orchestrator | Monday 10 February 2025 08:59:25 +0000 (0:00:00.083) 0:01:54.561 ******* 2025-02-10 08:59:25.517606 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:25.518261 | orchestrator | 2025-02-10 08:59:25.518329 | orchestrator | TASK [osism.services.nexus : Add apt repositories to global repos list] ******** 2025-02-10 08:59:25.518964 | orchestrator | Monday 10 February 2025 08:59:25 +0000 (0:00:00.096) 0:01:54.658 ******* 2025-02-10 08:59:25.602289 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:25.603357 | orchestrator | 2025-02-10 08:59:25.604556 | orchestrator | TASK [osism.services.nexus : Create configured repositories] ******************* 2025-02-10 08:59:25.605309 | orchestrator | Monday 10 February 2025 08:59:25 +0000 (0:00:00.085) 0:01:54.743 ******* 2025-02-10 08:59:25.714336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-10 08:59:25.714725 | orchestrator | 2025-02-10 08:59:25.715735 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:25.716563 | orchestrator | Monday 10 February 2025 08:59:25 +0000 (0:00:00.111) 0:01:54.855 ******* 2025-02-10 08:59:25.802820 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:25.803494 | orchestrator | 2025-02-10 08:59:25.803689 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:59:25.804086 | orchestrator | Monday 10 February 2025 08:59:25 +0000 (0:00:00.085) 0:01:54.941 ******* 2025-02-10 08:59:25.859705 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:59:25.860343 | orchestrator | 2025-02-10 08:59:25.861434 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:59:25.862708 | orchestrator | Monday 10 February 2025 08:59:25 +0000 (0:00:00.059) 0:01:55.000 ******* 2025-02-10 08:59:26.599316 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:26.600724 | orchestrator | 2025-02-10 08:59:29.483809 | orchestrator | TASK [osism.services.nexus : Calling script create_repos_from_list] ************ 2025-02-10 08:59:29.483989 | orchestrator | Monday 10 February 2025 08:59:26 +0000 (0:00:00.737) 0:01:55.738 ******* 2025-02-10 08:59:29.484031 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:29.591531 | orchestrator | 2025-02-10 08:59:29.591677 | orchestrator | TASK [Set osism.nexus.status fact] ********************************************* 2025-02-10 08:59:29.591697 | orchestrator | Monday 10 February 2025 08:59:29 +0000 (0:00:02.883) 0:01:58.622 ******* 2025-02-10 08:59:29.591732 | orchestrator | included: osism.commons.state for testbed-manager 2025-02-10 08:59:30.004424 | orchestrator | 2025-02-10 08:59:30.004746 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-10 08:59:30.004782 | orchestrator | Monday 10 February 2025 08:59:29 +0000 (0:00:00.110) 0:01:58.732 ******* 2025-02-10 08:59:30.004833 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:30.005125 | orchestrator | 2025-02-10 08:59:30.005173 | orchestrator | 
TASK [osism.commons.state : Write state into file] ***************************** 2025-02-10 08:59:30.005211 | orchestrator | Monday 10 February 2025 08:59:29 +0000 (0:00:00.412) 0:01:59.144 ******* 2025-02-10 08:59:30.562204 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:30.562818 | orchestrator | 2025-02-10 08:59:30.564388 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:59:30.564831 | orchestrator | 2025-02-10 08:59:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:59:30.564860 | orchestrator | 2025-02-10 08:59:30 | INFO  | Please wait and do not abort execution. 2025-02-10 08:59:30.564882 | orchestrator | testbed-manager : ok=64  changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 08:59:30.566192 | orchestrator | 2025-02-10 08:59:30.567020 | orchestrator | Monday 10 February 2025 08:59:30 +0000 (0:00:00.557) 0:01:59.702 ******* 2025-02-10 08:59:30.567907 | orchestrator | =============================================================================== 2025-02-10 08:59:30.568836 | orchestrator | osism.services.nexus : Wait for nexus service to start ----------------- 60.07s 2025-02-10 08:59:30.569043 | orchestrator | osism.services.nexus : Wait for an healthy nexus service --------------- 20.84s 2025-02-10 08:59:30.570092 | orchestrator | osism.services.nexus : Provision scripts included in the container image --- 3.44s 2025-02-10 08:59:30.570710 | orchestrator | osism.services.nexus : Calling script create_repos_from_list ------------ 2.88s 2025-02-10 08:59:30.571337 | orchestrator | osism.services.nexus : Cleanup default repositories --------------------- 2.09s 2025-02-10 08:59:30.572026 | orchestrator | osism.services.nexus : Allow anonymous access --------------------------- 1.94s 2025-02-10 08:59:30.573002 | orchestrator | osism.services.nexus : Copy configuration files ------------------------- 1.94s 2025-02-10 08:59:30.575510 | orchestrator | osism.services.nexus : Calling script update_admin_password ------------- 1.67s 2025-02-10 08:59:30.576789 | orchestrator | osism.services.nexus : Manage nexus service ----------------------------- 1.43s 2025-02-10 08:59:30.577481 | orchestrator | osism.services.nexus : Calling script setup_http_proxy ------------------ 1.06s 2025-02-10 08:59:30.577761 | orchestrator | osism.services.nexus : Calling script setup_realms ---------------------- 1.04s 2025-02-10 08:59:30.578409 | orchestrator | osism.services.nexus : Get setup admin password ------------------------- 1.00s 2025-02-10 08:59:30.580102 | orchestrator | osism.services.nexus : Copy docker-compose.yml file --------------------- 1.00s 2025-02-10 08:59:30.580344 | orchestrator | osism.services.nexus : Stop and disable old service docker-compose@nexus --- 0.97s 2025-02-10 08:59:30.580364 | orchestrator | osism.services.nexus : Create required directories ---------------------- 0.88s 2025-02-10 08:59:30.580375 | orchestrator | osism.services.nexus : Copy nexus systemd unit file --------------------- 0.87s 2025-02-10 08:59:30.580752 | orchestrator | osism.services.nexus : Create traefik external network ------------------ 0.79s 2025-02-10 08:59:30.581031 | orchestrator | osism.services.nexus : Wait for nexus ----------------------------------- 0.74s 2025-02-10 08:59:30.581554 | orchestrator | osism.services.nexus : Wait for nexus ----------------------------------- 0.74s 2025-02-10 08:59:30.581780 | orchestrator | 
osism.services.nexus : Declaring script update_admin_password ----------- 0.73s 2025-02-10 08:59:31.002664 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-10 08:59:31.009322 | orchestrator | + sh -c '/opt/configuration/scripts/set-docker-registry.sh nexus.testbed.osism.xyz:8193' 2025-02-10 08:59:31.009427 | orchestrator | + set -e 2025-02-10 08:59:31.017934 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 08:59:31.018000 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 08:59:31.018067 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 08:59:31.018087 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 08:59:31.018105 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 08:59:31.018123 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 08:59:31.018141 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 08:59:31.018158 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 08:59:31.018175 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 08:59:31.018192 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 08:59:31.018209 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 08:59:31.018226 | orchestrator | ++ export ARA=false 2025-02-10 08:59:31.018242 | orchestrator | ++ ARA=false 2025-02-10 08:59:31.018260 | orchestrator | ++ export TEMPEST=false 2025-02-10 08:59:31.018275 | orchestrator | ++ TEMPEST=false 2025-02-10 08:59:31.018290 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 08:59:31.018300 | orchestrator | ++ IS_ZUUL=true 2025-02-10 08:59:31.018310 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 08:59:31.018321 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 08:59:31.018330 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 08:59:31.018340 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 08:59:31.018349 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 08:59:31.018370 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 08:59:31.018380 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 08:59:31.018389 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 08:59:31.018399 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 08:59:31.018409 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 08:59:31.018418 | orchestrator | + DOCKER_REGISTRY=nexus.testbed.osism.xyz:8193 2025-02-10 08:59:31.018428 | orchestrator | + sed -i 's#ceph_docker_registry: .*#ceph_docker_registry: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-10 08:59:31.018514 | orchestrator | + sed -i 's#docker_registry_ansible: .*#docker_registry_ansible: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-10 08:59:31.023049 | orchestrator | + sed -i 's#docker_registry_kolla: .*#docker_registry_kolla: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-10 08:59:31.027407 | orchestrator | + sed -i 's#docker_registry_netbox: .*#docker_registry_netbox: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-10 08:59:31.032613 | orchestrator | + [[ nexus.testbed.osism.xyz:8193 == \o\s\i\s\m\.\h\a\r\b\o\r\.\r\e\g\i\o\.\d\i\g\i\t\a\l ]] 2025-02-10 08:59:31.033310 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-02-10 08:59:31.038086 | orchestrator | + sed -i 's#docker_namespace: osism#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-02-10 08:59:31.038120 
| orchestrator | + osism apply squid 2025-02-10 08:59:32.716034 | orchestrator | 2025-02-10 08:59:32 | INFO  | Task d4441286-4daf-4572-b3e2-3acbb46aa342 (squid) was prepared for execution. 2025-02-10 08:59:32.718626 | orchestrator | 2025-02-10 08:59:32 | INFO  | It takes a moment until task d4441286-4daf-4572-b3e2-3acbb46aa342 (squid) has been started and output is visible here. 2025-02-10 08:59:35.935369 | orchestrator | 2025-02-10 08:59:35.936169 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-02-10 08:59:35.936266 | orchestrator | 2025-02-10 08:59:35.937367 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-02-10 08:59:35.937534 | orchestrator | Monday 10 February 2025 08:59:35 +0000 (0:00:00.111) 0:00:00.111 ******* 2025-02-10 08:59:36.040789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 08:59:36.041790 | orchestrator | 2025-02-10 08:59:36.042887 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-02-10 08:59:36.042937 | orchestrator | Monday 10 February 2025 08:59:36 +0000 (0:00:00.106) 0:00:00.218 ******* 2025-02-10 08:59:37.475726 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:37.476720 | orchestrator | 2025-02-10 08:59:37.477164 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-02-10 08:59:37.477200 | orchestrator | Monday 10 February 2025 08:59:37 +0000 (0:00:01.434) 0:00:01.652 ******* 2025-02-10 08:59:38.674899 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-02-10 08:59:38.679641 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-02-10 08:59:38.679792 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-02-10 08:59:38.680809 | orchestrator | 2025-02-10 08:59:38.681188 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-02-10 08:59:38.682197 | orchestrator | Monday 10 February 2025 08:59:38 +0000 (0:00:01.198) 0:00:02.851 ******* 2025-02-10 08:59:39.761244 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-02-10 08:59:39.764764 | orchestrator | 2025-02-10 08:59:39.764878 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-02-10 08:59:39.765818 | orchestrator | Monday 10 February 2025 08:59:39 +0000 (0:00:01.086) 0:00:03.937 ******* 2025-02-10 08:59:40.150743 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:40.153145 | orchestrator | 2025-02-10 08:59:40.155807 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-02-10 08:59:40.155890 | orchestrator | Monday 10 February 2025 08:59:40 +0000 (0:00:00.388) 0:00:04.326 ******* 2025-02-10 08:59:41.150767 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:41.151338 | orchestrator | 2025-02-10 08:59:41.151559 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-02-10 08:59:41.151649 | orchestrator | Monday 10 February 2025 08:59:41 +0000 (0:00:00.999) 0:00:05.326 ******* 2025-02-10 09:00:11.325703 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
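The FAILED - RETRYING message above is Ansible's retry loop on the "Manage squid service" task, which keeps polling until the compose-based squid service is up; it succeeds on a later attempt just below, after roughly 30 seconds. A rough way to check the same state by hand on the manager, assuming the role installs a squid systemd unit and a compose project under /opt/squid (both names are assumptions, not taken from this log):

    # inspect the systemd unit wrapping the squid compose project (unit name assumed)
    systemctl status squid.service --no-pager
    # list the containers of the compose project (compose file path assumed)
    docker compose -f /opt/squid/docker-compose.yml ps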
2025-02-10 09:00:11.326389 | orchestrator | ok: [testbed-manager] 2025-02-10 09:00:11.326489 | orchestrator | 2025-02-10 09:00:11.326918 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-02-10 09:00:11.327601 | orchestrator | Monday 10 February 2025 09:00:11 +0000 (0:00:30.171) 0:00:35.497 ******* 2025-02-10 09:00:23.758821 | orchestrator | changed: [testbed-manager] 2025-02-10 09:01:23.852239 | orchestrator | 2025-02-10 09:01:23.852399 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-02-10 09:01:23.852490 | orchestrator | Monday 10 February 2025 09:00:23 +0000 (0:00:12.433) 0:00:47.931 ******* 2025-02-10 09:01:23.852529 | orchestrator | Pausing for 60 seconds 2025-02-10 09:01:23.852874 | orchestrator | changed: [testbed-manager] 2025-02-10 09:01:23.852903 | orchestrator | 2025-02-10 09:01:23.852920 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-02-10 09:01:23.852944 | orchestrator | Monday 10 February 2025 09:01:23 +0000 (0:01:00.093) 0:01:48.025 ******* 2025-02-10 09:01:23.920025 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:23.920359 | orchestrator | 2025-02-10 09:01:23.920794 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-02-10 09:01:23.921659 | orchestrator | Monday 10 February 2025 09:01:23 +0000 (0:00:00.070) 0:01:48.096 ******* 2025-02-10 09:01:24.564342 | orchestrator | changed: [testbed-manager] 2025-02-10 09:01:24.564891 | orchestrator | 2025-02-10 09:01:24.565914 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:01:24.566868 | orchestrator | 2025-02-10 09:01:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:01:24.567873 | orchestrator | 2025-02-10 09:01:24 | INFO  | Please wait and do not abort execution. 
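The handler sequence above mirrors the nexus role earlier in this log: restart the service, pause for a fixed 60 seconds, then poll until the container reports healthy. A shell-level sketch of that final health check, assuming the container is named squid and defines a Docker healthcheck (both assumptions):

    # poll the Docker health status until the container reports healthy (container name assumed)
    until [ "$(docker inspect --format '{{.State.Health.Status}}' squid)" = "healthy" ]; do
      sleep 5
    done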
2025-02-10 09:01:24.568490 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:01:24.569499 | orchestrator | 2025-02-10 09:01:24.570242 | orchestrator | Monday 10 February 2025 09:01:24 +0000 (0:00:00.644) 0:01:48.740 ******* 2025-02-10 09:01:24.570865 | orchestrator | =============================================================================== 2025-02-10 09:01:24.571693 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-02-10 09:01:24.572192 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.17s 2025-02-10 09:01:24.572979 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.43s 2025-02-10 09:01:24.573562 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s 2025-02-10 09:01:24.574077 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.20s 2025-02-10 09:01:24.575027 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2025-02-10 09:01:24.575720 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.00s 2025-02-10 09:01:24.576409 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2025-02-10 09:01:24.577182 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-02-10 09:01:24.577530 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2025-02-10 09:01:24.577964 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-02-10 09:01:25.027570 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-02-10 09:01:26.549548 | orchestrator | 2025-02-10 09:01:26 | INFO  | Task 7fd997bd-11d9-42e2-9efd-e8c5b073c48b (operator) was prepared for execution. 2025-02-10 09:01:29.460826 | orchestrator | 2025-02-10 09:01:26 | INFO  | It takes a moment until task 7fd997bd-11d9-42e2-9efd-e8c5b073c48b (operator) has been started and output is visible here. 
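The extra arguments on the operator call are handed through to Ansible: -l testbed-nodes limits the play to the testbed-nodes group and -u ubuntu connects as the image's default user (standard Ansible option semantics; the exact pass-through behaviour is an assumption, not shown in this log). For reference, the osism apply calls issued so far in this part of the job, collected from the shell traces above:

    osism apply nexus                                 # Nexus registry/proxy on the manager
    osism apply squid                                 # Squid caching proxy on the manager
    osism apply operator -u ubuntu -l testbed-nodes   # operator user on the testbed nodes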
2025-02-10 09:01:29.460957 | orchestrator | 2025-02-10 09:01:33.085485 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-02-10 09:01:33.085690 | orchestrator | 2025-02-10 09:01:33.085714 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 09:01:33.085737 | orchestrator | Monday 10 February 2025 09:01:29 +0000 (0:00:00.083) 0:00:00.083 ******* 2025-02-10 09:01:33.085781 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:33.085882 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:33.085914 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:33.087482 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:33.088750 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:33.090126 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:33.091173 | orchestrator | 2025-02-10 09:01:33.091807 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-02-10 09:01:33.092570 | orchestrator | Monday 10 February 2025 09:01:33 +0000 (0:00:03.633) 0:00:03.717 ******* 2025-02-10 09:01:33.872989 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:33.874660 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:33.875649 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:33.876836 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:33.877782 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:33.878708 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:33.879510 | orchestrator | 2025-02-10 09:01:33.880205 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-02-10 09:01:33.880754 | orchestrator | 2025-02-10 09:01:33.881520 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-02-10 09:01:33.881713 | orchestrator | Monday 10 February 2025 09:01:33 +0000 (0:00:00.790) 0:00:04.507 ******* 2025-02-10 09:01:33.940122 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:33.961974 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:34.003789 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:34.066158 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:34.067120 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:34.069831 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:34.070850 | orchestrator | 2025-02-10 09:01:34.072183 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-02-10 09:01:34.073630 | orchestrator | Monday 10 February 2025 09:01:34 +0000 (0:00:00.191) 0:00:04.699 ******* 2025-02-10 09:01:34.134708 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:34.155921 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:34.181515 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:34.227792 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:34.228882 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:34.230257 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:34.231035 | orchestrator | 2025-02-10 09:01:34.232038 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-02-10 09:01:34.232577 | orchestrator | Monday 10 February 2025 09:01:34 +0000 (0:00:00.163) 0:00:04.862 ******* 2025-02-10 09:01:34.794669 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:34.795628 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:34.795686 | orchestrator | changed: [testbed-node-3] 2025-02-10 
09:01:34.796155 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:34.797139 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:34.797609 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:34.798330 | orchestrator | 2025-02-10 09:01:34.798735 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-02-10 09:01:34.799155 | orchestrator | Monday 10 February 2025 09:01:34 +0000 (0:00:00.565) 0:00:05.427 ******* 2025-02-10 09:01:35.542247 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:35.544906 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:35.545170 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:35.545621 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:35.546151 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:35.546831 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:35.547275 | orchestrator | 2025-02-10 09:01:35.547644 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-02-10 09:01:35.548271 | orchestrator | Monday 10 February 2025 09:01:35 +0000 (0:00:00.747) 0:00:06.175 ******* 2025-02-10 09:01:36.833774 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-02-10 09:01:36.833964 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-02-10 09:01:36.834138 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-02-10 09:01:36.834833 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-02-10 09:01:36.836411 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-02-10 09:01:36.837940 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-02-10 09:01:36.839452 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-02-10 09:01:36.840509 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-02-10 09:01:36.840839 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-02-10 09:01:36.841952 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-02-10 09:01:36.842303 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-02-10 09:01:36.842784 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-02-10 09:01:36.843444 | orchestrator | 2025-02-10 09:01:36.844056 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-02-10 09:01:36.844682 | orchestrator | Monday 10 February 2025 09:01:36 +0000 (0:00:01.290) 0:00:07.465 ******* 2025-02-10 09:01:38.046392 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:38.046731 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:38.046770 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:38.047838 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:38.048980 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:38.049859 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:38.050454 | orchestrator | 2025-02-10 09:01:38.051025 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-02-10 09:01:38.052005 | orchestrator | Monday 10 February 2025 09:01:38 +0000 (0:00:01.213) 0:00:08.679 ******* 2025-02-10 09:01:39.269567 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-02-10 09:01:39.271523 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-02-10 09:01:39.271655 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-02-10 09:01:39.424534 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:01:39.425269 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:01:39.426014 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:01:39.426927 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:01:39.428219 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:01:39.428772 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:01:39.429884 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-02-10 09:01:39.430367 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-02-10 09:01:39.431243 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-02-10 09:01:39.431658 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-02-10 09:01:39.431968 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-02-10 09:01:39.432479 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-02-10 09:01:39.432953 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:01:39.433690 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:01:39.434194 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:01:39.434763 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:01:39.435451 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:01:39.436100 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:01:39.436611 | orchestrator | 2025-02-10 09:01:39.437071 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-02-10 09:01:39.437479 | orchestrator | Monday 10 February 2025 09:01:39 +0000 (0:00:01.377) 0:00:10.057 ******* 2025-02-10 09:01:40.135636 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:40.135836 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:40.135865 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:40.139713 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:40.142509 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:40.143066 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:40.144402 | orchestrator | 2025-02-10 09:01:40.145514 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-02-10 09:01:40.146509 | orchestrator | Monday 10 February 2025 09:01:40 +0000 (0:00:00.710) 0:00:10.768 ******* 2025-02-10 09:01:40.228197 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:40.251822 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:40.297850 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:40.299010 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:40.300483 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:40.300879 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:40.301965 | orchestrator | 2025-02-10 09:01:40.302726 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
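The [WARNING] about remote_tmp a few entries above is harmless in this run; Ansible created the directory itself with mode 0700. The manual fix it suggests is simply to pre-create the remote temporary directory with the right permissions for the user the modules run as, for example (path taken from the warning, shown here for the current user's home as an illustration):

    # pre-create Ansible's remote temporary directory with mode 0700
    install -d -m 0700 ~/.ansible/tmp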
2025-02-10 09:01:40.303706 | orchestrator | Monday 10 February 2025 09:01:40 +0000 (0:00:00.162) 0:00:10.930 ******* 2025-02-10 09:01:41.071150 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:01:41.071388 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:01:41.071866 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:41.072669 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:41.073344 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:01:41.073665 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-02-10 09:01:41.074113 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:01:41.075348 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:41.076638 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:41.076673 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:41.078161 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-02-10 09:01:41.078238 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:41.079180 | orchestrator | 2025-02-10 09:01:41.080023 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-02-10 09:01:41.081106 | orchestrator | Monday 10 February 2025 09:01:41 +0000 (0:00:00.773) 0:00:11.704 ******* 2025-02-10 09:01:41.148643 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:41.184550 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:41.207927 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:41.254243 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:41.254503 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:41.254964 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:41.255802 | orchestrator | 2025-02-10 09:01:41.256066 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-02-10 09:01:41.257324 | orchestrator | Monday 10 February 2025 09:01:41 +0000 (0:00:00.183) 0:00:11.888 ******* 2025-02-10 09:01:41.325716 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:41.344696 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:41.372234 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:41.413107 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:41.414158 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:41.414834 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:41.415402 | orchestrator | 2025-02-10 09:01:41.415909 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-02-10 09:01:41.416736 | orchestrator | Monday 10 February 2025 09:01:41 +0000 (0:00:00.158) 0:00:12.046 ******* 2025-02-10 09:01:41.471241 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:41.540073 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:41.569355 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:41.610485 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:41.610787 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:41.611947 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:41.612013 | orchestrator | 2025-02-10 09:01:41.612641 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-02-10 09:01:41.613127 | orchestrator | Monday 10 February 2025 09:01:41 +0000 (0:00:00.197) 0:00:12.244 ******* 2025-02-10 09:01:42.321111 | orchestrator | changed: [testbed-node-0] 2025-02-10 
09:01:42.322513 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:42.322912 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:42.324224 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:42.325192 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:42.325947 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:42.326549 | orchestrator | 2025-02-10 09:01:42.327056 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-02-10 09:01:42.327503 | orchestrator | Monday 10 February 2025 09:01:42 +0000 (0:00:00.709) 0:00:12.953 ******* 2025-02-10 09:01:42.417859 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:42.435261 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:42.564895 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:42.565127 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:42.565863 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:42.565929 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:42.566239 | orchestrator | 2025-02-10 09:01:42.567071 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:01:42.567177 | orchestrator | 2025-02-10 09:01:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:01:42.567500 | orchestrator | 2025-02-10 09:01:42 | INFO  | Please wait and do not abort execution. 2025-02-10 09:01:42.567549 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:01:42.568178 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:01:42.568271 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:01:42.568347 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:01:42.569157 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:01:42.569941 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:01:42.571054 | orchestrator | 2025-02-10 09:01:42.571099 | orchestrator | Monday 10 February 2025 09:01:42 +0000 (0:00:00.246) 0:00:13.199 ******* 2025-02-10 09:01:42.571385 | orchestrator | =============================================================================== 2025-02-10 09:01:42.571991 | orchestrator | Gathering Facts --------------------------------------------------------- 3.63s 2025-02-10 09:01:42.572175 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.38s 2025-02-10 09:01:42.572854 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.29s 2025-02-10 09:01:42.573031 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s 2025-02-10 09:01:42.573527 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s 2025-02-10 09:01:42.573558 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.77s 2025-02-10 09:01:42.574630 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.75s 2025-02-10 09:01:42.575629 | orchestrator | osism.commons.operator : Create .ssh directory 
-------------------------- 0.71s 2025-02-10 09:01:42.577017 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s 2025-02-10 09:01:42.577111 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.57s 2025-02-10 09:01:42.577844 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2025-02-10 09:01:42.577968 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.20s 2025-02-10 09:01:42.578514 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2025-02-10 09:01:42.578825 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2025-02-10 09:01:42.579367 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-02-10 09:01:42.580005 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-02-10 09:01:42.580306 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-02-10 09:01:43.033051 | orchestrator | + osism apply --environment custom facts 2025-02-10 09:01:44.554469 | orchestrator | 2025-02-10 09:01:44 | INFO  | Trying to run play facts in environment custom 2025-02-10 09:01:44.605079 | orchestrator | 2025-02-10 09:01:44 | INFO  | Task c1d70f50-4e43-4acb-93d3-ff6fb0fca4a2 (facts) was prepared for execution. 2025-02-10 09:01:47.845083 | orchestrator | 2025-02-10 09:01:44 | INFO  | It takes a moment until task c1d70f50-4e43-4acb-93d3-ff6fb0fca4a2 (facts) has been started and output is visible here. 2025-02-10 09:01:47.845286 | orchestrator | 2025-02-10 09:01:47.847077 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-02-10 09:01:47.847143 | orchestrator | 2025-02-10 09:01:49.310494 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-10 09:01:49.310672 | orchestrator | Monday 10 February 2025 09:01:47 +0000 (0:00:00.112) 0:00:00.112 ******* 2025-02-10 09:01:49.310748 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:49.310856 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:49.311600 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:49.312859 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:49.314093 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:49.314432 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:49.315169 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:49.315870 | orchestrator | 2025-02-10 09:01:49.316299 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-02-10 09:01:49.316906 | orchestrator | Monday 10 February 2025 09:01:49 +0000 (0:00:01.466) 0:00:01.578 ******* 2025-02-10 09:01:50.575998 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:50.576573 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:50.577483 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:50.578906 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:50.580044 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:50.580301 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:50.581191 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:50.582080 | orchestrator | 2025-02-10 09:01:50.582727 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-02-10 09:01:50.585798 | orchestrator | 2025-02-10 09:01:50.586674 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-10 09:01:50.587516 | orchestrator | Monday 10 February 2025 09:01:50 +0000 (0:00:01.264) 0:00:02.843 ******* 2025-02-10 09:01:50.677912 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:50.678200 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:50.678686 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:50.679345 | orchestrator | 2025-02-10 09:01:50.680645 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-10 09:01:50.821984 | orchestrator | Monday 10 February 2025 09:01:50 +0000 (0:00:00.105) 0:00:02.949 ******* 2025-02-10 09:01:50.822213 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:50.822539 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:50.822661 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:50.822696 | orchestrator | 2025-02-10 09:01:50.823668 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-10 09:01:50.823733 | orchestrator | Monday 10 February 2025 09:01:50 +0000 (0:00:00.144) 0:00:03.093 ******* 2025-02-10 09:01:50.959896 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:50.960332 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:50.961998 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:50.962314 | orchestrator | 2025-02-10 09:01:50.962345 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-10 09:01:50.962369 | orchestrator | Monday 10 February 2025 09:01:50 +0000 (0:00:00.137) 0:00:03.231 ******* 2025-02-10 09:01:51.114808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:01:51.116128 | orchestrator | 2025-02-10 09:01:51.635844 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-10 09:01:51.636023 | orchestrator | Monday 10 February 2025 09:01:51 +0000 (0:00:00.155) 0:00:03.386 ******* 2025-02-10 09:01:51.636082 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:51.636345 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:51.636406 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:51.637110 | orchestrator | 2025-02-10 09:01:51.637159 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-10 09:01:51.638226 | orchestrator | Monday 10 February 2025 09:01:51 +0000 (0:00:00.519) 0:00:03.905 ******* 2025-02-10 09:01:51.751876 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:51.752672 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:51.754852 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:51.755325 | orchestrator | 2025-02-10 09:01:51.756510 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-10 09:01:51.756791 | orchestrator | Monday 10 February 2025 09:01:51 +0000 (0:00:00.117) 0:00:04.023 ******* 2025-02-10 09:01:52.750842 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:52.751368 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:52.751408 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:52.751597 | orchestrator | 2025-02-10 09:01:52.752090 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-02-10 09:01:52.752673 | orchestrator | Monday 10 February 2025 09:01:52 +0000 (0:00:00.996) 0:00:05.020 ******* 2025-02-10 09:01:53.246583 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:53.246843 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:53.247581 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:53.249024 | orchestrator | 2025-02-10 09:01:53.250247 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-10 09:01:53.251197 | orchestrator | Monday 10 February 2025 09:01:53 +0000 (0:00:00.494) 0:00:05.514 ******* 2025-02-10 09:01:54.320706 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:54.321322 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:54.321622 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:54.322644 | orchestrator | 2025-02-10 09:01:54.324147 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-10 09:01:54.325598 | orchestrator | Monday 10 February 2025 09:01:54 +0000 (0:00:01.073) 0:00:06.587 ******* 2025-02-10 09:02:08.036324 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:08.103455 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:08.103597 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:08.103618 | orchestrator | 2025-02-10 09:02:08.103635 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-02-10 09:02:08.103651 | orchestrator | Monday 10 February 2025 09:02:08 +0000 (0:00:13.716) 0:00:20.304 ******* 2025-02-10 09:02:08.103711 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:02:08.134617 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:02:08.135495 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:02:08.139459 | orchestrator | 2025-02-10 09:02:08.140445 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-02-10 09:02:08.142795 | orchestrator | Monday 10 February 2025 09:02:08 +0000 (0:00:00.101) 0:00:20.406 ******* 2025-02-10 09:02:16.351945 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:16.352152 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:16.352185 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:16.352437 | orchestrator | 2025-02-10 09:02:16.353105 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-10 09:02:16.353364 | orchestrator | Monday 10 February 2025 09:02:16 +0000 (0:00:08.215) 0:00:28.621 ******* 2025-02-10 09:02:16.830140 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:16.830883 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:16.830925 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:16.831618 | orchestrator | 2025-02-10 09:02:16.832199 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-02-10 09:02:16.832822 | orchestrator | Monday 10 February 2025 09:02:16 +0000 (0:00:00.480) 0:00:29.101 ******* 2025-02-10 09:02:20.442928 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-02-10 09:02:20.443365 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-02-10 09:02:20.444293 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-02-10 09:02:20.444343 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 
2025-02-10 09:02:20.445046 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-02-10 09:02:20.445939 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-02-10 09:02:20.446847 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-02-10 09:02:20.447072 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-02-10 09:02:20.447988 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-02-10 09:02:20.449034 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-02-10 09:02:20.449720 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-02-10 09:02:20.450287 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-02-10 09:02:20.450791 | orchestrator | 2025-02-10 09:02:20.451739 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-10 09:02:20.452588 | orchestrator | Monday 10 February 2025 09:02:20 +0000 (0:00:03.609) 0:00:32.711 ******* 2025-02-10 09:02:21.728247 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:21.728568 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:21.729752 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:21.731078 | orchestrator | 2025-02-10 09:02:21.732617 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:02:21.733726 | orchestrator | 2025-02-10 09:02:21.734845 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:02:21.736487 | orchestrator | Monday 10 February 2025 09:02:21 +0000 (0:00:01.285) 0:00:33.997 ******* 2025-02-10 09:02:25.686208 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:25.686390 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:25.687226 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:25.687255 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:25.687273 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:25.687289 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:25.687305 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:25.687321 | orchestrator | 2025-02-10 09:02:25.687339 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:02:25.687357 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:02:25.687375 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:02:25.687393 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:02:25.687438 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:02:25.687455 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:02:25.687471 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:02:25.687512 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:02:25.687527 | orchestrator | 2025-02-10 09:02:25.687542 | orchestrator | Monday 10 February 2025 09:02:25 +0000 (0:00:03.950) 0:00:37.947 ******* 2025-02-10 09:02:25.687556 | orchestrator | 
=============================================================================== 2025-02-10 09:02:25.687570 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.72s 2025-02-10 09:02:25.687584 | orchestrator | Install required packages (Debian) -------------------------------------- 8.22s 2025-02-10 09:02:25.687598 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.95s 2025-02-10 09:02:25.687613 | orchestrator | Copy fact files --------------------------------------------------------- 3.61s 2025-02-10 09:02:25.687627 | orchestrator | Create custom facts directory ------------------------------------------- 1.47s 2025-02-10 09:02:25.687641 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.29s 2025-02-10 09:02:25.687665 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s 2025-02-10 09:02:25.687680 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s 2025-02-10 09:02:25.687725 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s 2025-02-10 09:02:25.687740 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.52s 2025-02-10 09:02:25.687754 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s 2025-02-10 09:02:25.687768 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s 2025-02-10 09:02:25.687792 | orchestrator | 2025-02-10 09:02:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:02:26.148284 | orchestrator | 2025-02-10 09:02:25 | INFO  | Please wait and do not abort execution. 2025-02-10 09:02:26.148473 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-02-10 09:02:26.148495 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.14s 2025-02-10 09:02:26.148511 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.14s 2025-02-10 09:02:26.148525 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-02-10 09:02:26.148540 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-02-10 09:02:26.148554 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-02-10 09:02:26.148618 | orchestrator | + osism apply bootstrap 2025-02-10 09:02:27.693036 | orchestrator | 2025-02-10 09:02:27 | INFO  | Task deda3a4b-f7ad-4dd0-b9c5-328662b38cd7 (bootstrap) was prepared for execution. 2025-02-10 09:02:31.055981 | orchestrator | 2025-02-10 09:02:27 | INFO  | It takes a moment until task deda3a4b-f7ad-4dd0-b9c5-328662b38cd7 (bootstrap) has been started and output is visible here. 
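The `+ osism apply bootstrap` call above hands the bootstrap play to the OSISM task runner; the output that follows shows it grouping hosts by state, gathering facts, and then applying the osism.commons and osism.services roles (hostname, hosts, proxy, resolvconf, repository, rsyslog, systohc, configfs, packages) to the manager and the six testbed nodes. For orientation only, a stripped-down playbook exercising the same roles directly might look like the sketch below; this is an illustrative reconstruction based on the role names visible in this log, not the actual OSISM bootstrap playbook, and the inventory group name "testbed" is an assumption.

    # illustrative sketch only: a subset of the roles seen in the bootstrap log,
    # applied to an assumed inventory group "testbed"
    - name: Apply bootstrap roles (sketch)
      hosts: testbed
      become: true
      roles:
        - osism.commons.hostname
        - osism.commons.hosts
        - osism.commons.proxy
        - osism.commons.resolvconf
        - osism.commons.repository
        - osism.services.rsyslog
        - osism.commons.systohc
        - osism.commons.configfs
        - osism.commons.packages

Such a sketch could be run with a plain `ansible-playbook -i <inventory> <file>.yml`; in the testbed itself the same work is driven through `osism apply bootstrap`, as the log shows.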
2025-02-10 09:02:31.056142 | orchestrator | 2025-02-10 09:02:31.058004 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-02-10 09:02:31.062193 | orchestrator | 2025-02-10 09:02:31.062224 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-02-10 09:02:31.062244 | orchestrator | Monday 10 February 2025 09:02:31 +0000 (0:00:00.113) 0:00:00.113 ******* 2025-02-10 09:02:31.136366 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:31.166664 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:31.200313 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:31.228497 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:31.330116 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:31.333511 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:31.334067 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:31.334094 | orchestrator | 2025-02-10 09:02:31.334103 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:02:31.334118 | orchestrator | 2025-02-10 09:02:31.334989 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:02:31.336063 | orchestrator | Monday 10 February 2025 09:02:31 +0000 (0:00:00.277) 0:00:00.390 ******* 2025-02-10 09:02:35.171529 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:35.171927 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:35.171953 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:35.171965 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:35.172024 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:35.172712 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:35.173092 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:35.175722 | orchestrator | 2025-02-10 09:02:35.175889 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-02-10 09:02:35.176219 | orchestrator | 2025-02-10 09:02:35.176791 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:02:35.176992 | orchestrator | Monday 10 February 2025 09:02:35 +0000 (0:00:03.831) 0:00:04.222 ******* 2025-02-10 09:02:35.269196 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-02-10 09:02:35.315106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-02-10 09:02:35.315273 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-02-10 09:02:35.315314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:02:35.318968 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-02-10 09:02:35.357744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:02:35.357874 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-02-10 09:02:35.357891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:02:35.357905 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-02-10 09:02:35.357938 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:02:35.603728 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-02-10 09:02:35.603887 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-02-10 09:02:35.603967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  
2025-02-10 09:02:35.605238 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-02-10 09:02:35.605829 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:02:35.605863 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:02:35.606938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-02-10 09:02:35.607631 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:02:35.607846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:02:35.609226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:02:35.609517 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:02:35.609999 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:02:35.611553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-02-10 09:02:35.612473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:02:35.613712 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:02:35.614570 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:02:35.616130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:02:35.617602 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:02:35.617682 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-02-10 09:02:35.617704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:02:35.618498 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:02:35.619126 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:02:35.619555 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:02:35.620450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:02:35.621024 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:02:35.622495 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:02:35.622677 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:02:35.622701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:02:35.622717 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:02:35.622762 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:02:35.623232 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:02:35.623836 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:02:35.624310 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:02:35.624778 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:02:35.625219 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:02:35.625863 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:02:35.626301 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:02:35.626844 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:02:35.627144 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:02:35.627846 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:02:35.628043 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:02:35.628519 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:02:35.629894 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:02:35.630343 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:02:35.636879 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:02:35.677845 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:02:35.677966 | orchestrator | 2025-02-10 09:02:35.677984 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-02-10 09:02:35.677999 | orchestrator | 2025-02-10 09:02:35.678014 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-02-10 09:02:35.678098 | orchestrator | Monday 10 February 2025 09:02:35 +0000 (0:00:00.442) 0:00:04.665 ******* 2025-02-10 09:02:35.678132 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:35.705301 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:35.741460 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:35.765023 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:35.831023 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:35.832212 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:35.833685 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:35.834864 | orchestrator | 2025-02-10 09:02:35.835849 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-02-10 09:02:35.836725 | orchestrator | Monday 10 February 2025 09:02:35 +0000 (0:00:00.225) 0:00:04.891 ******* 2025-02-10 09:02:37.089314 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:37.089934 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:37.091087 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:37.092698 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:37.093735 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:37.094527 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:37.095236 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:37.096127 | orchestrator | 2025-02-10 09:02:37.096605 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-02-10 09:02:37.097048 | orchestrator | Monday 10 February 2025 09:02:37 +0000 (0:00:01.257) 0:00:06.149 ******* 2025-02-10 09:02:38.490123 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:38.491120 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:38.491221 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:38.493527 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:38.493833 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:38.495165 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:38.495904 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:38.496652 | orchestrator | 2025-02-10 09:02:38.497426 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-02-10 09:02:38.498095 | orchestrator | Monday 10 February 2025 09:02:38 +0000 (0:00:01.397) 0:00:07.546 ******* 2025-02-10 09:02:38.791195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:02:38.793538 | orchestrator | 2025-02-10 09:02:38.794238 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] 
****************************** 2025-02-10 09:02:38.795227 | orchestrator | Monday 10 February 2025 09:02:38 +0000 (0:00:00.304) 0:00:07.851 ******* 2025-02-10 09:02:40.852373 | orchestrator | changed: [testbed-manager] 2025-02-10 09:02:40.852680 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:40.853736 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:40.854167 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:40.855022 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:40.855554 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:40.857534 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:40.857994 | orchestrator | 2025-02-10 09:02:40.858483 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-02-10 09:02:40.858923 | orchestrator | Monday 10 February 2025 09:02:40 +0000 (0:00:02.062) 0:00:09.913 ******* 2025-02-10 09:02:40.928619 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:02:41.144465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:02:41.148244 | orchestrator | 2025-02-10 09:02:42.277480 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-02-10 09:02:42.277630 | orchestrator | Monday 10 February 2025 09:02:41 +0000 (0:00:00.291) 0:00:10.204 ******* 2025-02-10 09:02:42.277671 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:42.278496 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:42.278534 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:42.279344 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:42.279376 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:42.280551 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:42.281600 | orchestrator | 2025-02-10 09:02:42.282315 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-02-10 09:02:42.282548 | orchestrator | Monday 10 February 2025 09:02:42 +0000 (0:00:01.132) 0:00:11.337 ******* 2025-02-10 09:02:42.362205 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:02:42.878861 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:42.880237 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:42.881646 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:42.882601 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:42.883361 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:42.884047 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:42.884698 | orchestrator | 2025-02-10 09:02:42.885388 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-02-10 09:02:42.885758 | orchestrator | Monday 10 February 2025 09:02:42 +0000 (0:00:00.602) 0:00:11.939 ******* 2025-02-10 09:02:42.986223 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:02:43.006258 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:02:43.028969 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:02:43.348887 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:02:43.352512 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:02:43.352605 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:02:43.355676 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:43.356643 | orchestrator 
| 2025-02-10 09:02:43.357365 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-02-10 09:02:43.358457 | orchestrator | Monday 10 February 2025 09:02:43 +0000 (0:00:00.468) 0:00:12.408 ******* 2025-02-10 09:02:43.423811 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:02:43.450259 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:02:43.481090 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:02:43.503601 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:02:43.585485 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:02:43.586709 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:02:43.587260 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:02:43.588543 | orchestrator | 2025-02-10 09:02:43.588916 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-02-10 09:02:43.590112 | orchestrator | Monday 10 February 2025 09:02:43 +0000 (0:00:00.238) 0:00:12.646 ******* 2025-02-10 09:02:43.906980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:02:43.908056 | orchestrator | 2025-02-10 09:02:43.909961 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-02-10 09:02:44.267035 | orchestrator | Monday 10 February 2025 09:02:43 +0000 (0:00:00.321) 0:00:12.968 ******* 2025-02-10 09:02:44.267205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:02:44.269882 | orchestrator | 2025-02-10 09:02:44.271760 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-02-10 09:02:45.652349 | orchestrator | Monday 10 February 2025 09:02:44 +0000 (0:00:00.358) 0:00:13.326 ******* 2025-02-10 09:02:45.652577 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:45.652644 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:45.652815 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:45.652833 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:45.652884 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:45.653633 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:45.654496 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:45.654685 | orchestrator | 2025-02-10 09:02:45.654715 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-02-10 09:02:45.655003 | orchestrator | Monday 10 February 2025 09:02:45 +0000 (0:00:01.386) 0:00:14.713 ******* 2025-02-10 09:02:45.737963 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:02:45.767250 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:02:45.791608 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:02:45.816665 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:02:45.886524 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:02:45.886723 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:02:45.886747 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:02:45.886768 | orchestrator | 2025-02-10 09:02:45.888244 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file 
status of /etc/resolv.conf] ***** 2025-02-10 09:02:45.888552 | orchestrator | Monday 10 February 2025 09:02:45 +0000 (0:00:00.231) 0:00:14.944 ******* 2025-02-10 09:02:46.570378 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:46.571223 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:46.571904 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:46.572713 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:46.573247 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:46.573861 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:46.574629 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:46.575116 | orchestrator | 2025-02-10 09:02:46.575787 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-02-10 09:02:46.576385 | orchestrator | Monday 10 February 2025 09:02:46 +0000 (0:00:00.685) 0:00:15.630 ******* 2025-02-10 09:02:46.679024 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:02:46.712061 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:02:46.738283 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:02:46.763698 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:02:46.840794 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:02:46.841386 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:02:46.842574 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:02:46.844687 | orchestrator | 2025-02-10 09:02:46.844958 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-02-10 09:02:46.846260 | orchestrator | Monday 10 February 2025 09:02:46 +0000 (0:00:00.270) 0:00:15.900 ******* 2025-02-10 09:02:47.418602 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:47.419147 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:47.419193 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:47.419879 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:47.421651 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:47.423064 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:47.423555 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:47.424432 | orchestrator | 2025-02-10 09:02:47.425837 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-02-10 09:02:47.426563 | orchestrator | Monday 10 February 2025 09:02:47 +0000 (0:00:00.575) 0:00:16.476 ******* 2025-02-10 09:02:48.618370 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:48.621506 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:48.622578 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:48.624370 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:48.625057 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:48.626388 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:48.626915 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:48.628037 | orchestrator | 2025-02-10 09:02:48.628695 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-02-10 09:02:48.629464 | orchestrator | Monday 10 February 2025 09:02:48 +0000 (0:00:01.199) 0:00:17.676 ******* 2025-02-10 09:02:49.810082 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:49.810321 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:49.811961 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:49.812697 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:49.813215 | orchestrator | ok: 
[testbed-node-0] 2025-02-10 09:02:49.813865 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:49.814176 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:49.815384 | orchestrator | 2025-02-10 09:02:49.816479 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-02-10 09:02:49.817306 | orchestrator | Monday 10 February 2025 09:02:49 +0000 (0:00:01.188) 0:00:18.864 ******* 2025-02-10 09:02:50.258855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:02:50.260328 | orchestrator | 2025-02-10 09:02:50.261759 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-02-10 09:02:50.262874 | orchestrator | Monday 10 February 2025 09:02:50 +0000 (0:00:00.452) 0:00:19.317 ******* 2025-02-10 09:02:50.333179 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:02:51.580358 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:51.580816 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:51.581978 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:51.583525 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:51.585076 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:51.586007 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:51.586577 | orchestrator | 2025-02-10 09:02:51.587607 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-10 09:02:51.588258 | orchestrator | Monday 10 February 2025 09:02:51 +0000 (0:00:01.321) 0:00:20.639 ******* 2025-02-10 09:02:51.688499 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:51.722156 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:51.748836 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:51.820879 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:51.821051 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:51.821079 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:51.823823 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:51.824420 | orchestrator | 2025-02-10 09:02:51.824956 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-10 09:02:51.825673 | orchestrator | Monday 10 February 2025 09:02:51 +0000 (0:00:00.242) 0:00:20.882 ******* 2025-02-10 09:02:51.928727 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:51.948644 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:51.975645 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:52.061467 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:52.061978 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:52.063198 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:52.063964 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:52.064538 | orchestrator | 2025-02-10 09:02:52.065259 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-10 09:02:52.065894 | orchestrator | Monday 10 February 2025 09:02:52 +0000 (0:00:00.240) 0:00:21.122 ******* 2025-02-10 09:02:52.148945 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:52.175226 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:52.203508 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:52.244108 | orchestrator | ok: [testbed-node-2] 2025-02-10 
09:02:52.313657 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:52.313938 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:52.315813 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:52.320183 | orchestrator | 2025-02-10 09:02:52.321136 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-10 09:02:52.321937 | orchestrator | Monday 10 February 2025 09:02:52 +0000 (0:00:00.252) 0:00:21.375 ******* 2025-02-10 09:02:52.631100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:02:52.632659 | orchestrator | 2025-02-10 09:02:52.633658 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-10 09:02:52.634945 | orchestrator | Monday 10 February 2025 09:02:52 +0000 (0:00:00.314) 0:00:21.689 ******* 2025-02-10 09:02:53.200769 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:53.201691 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:53.202433 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:53.203312 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:53.204197 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:53.204729 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:53.205340 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:53.206923 | orchestrator | 2025-02-10 09:02:53.275496 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-10 09:02:53.275645 | orchestrator | Monday 10 February 2025 09:02:53 +0000 (0:00:00.569) 0:00:22.258 ******* 2025-02-10 09:02:53.275711 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:02:53.310111 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:02:53.334311 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:02:53.364692 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:02:53.428460 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:02:53.429548 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:02:53.430426 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:02:53.431746 | orchestrator | 2025-02-10 09:02:53.433326 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-10 09:02:53.434284 | orchestrator | Monday 10 February 2025 09:02:53 +0000 (0:00:00.231) 0:00:22.489 ******* 2025-02-10 09:02:54.477774 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:54.481148 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:54.481461 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:54.482165 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:54.483003 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:54.483585 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:54.483960 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:54.484947 | orchestrator | 2025-02-10 09:02:54.485652 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-10 09:02:54.486116 | orchestrator | Monday 10 February 2025 09:02:54 +0000 (0:00:01.047) 0:00:23.537 ******* 2025-02-10 09:02:55.113832 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:55.114779 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:55.114835 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:55.114869 | orchestrator | 
ok: [testbed-node-2] 2025-02-10 09:02:55.118116 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:55.118608 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:55.118653 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:55.118673 | orchestrator | 2025-02-10 09:02:55.118741 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-10 09:02:55.118824 | orchestrator | Monday 10 February 2025 09:02:55 +0000 (0:00:00.636) 0:00:24.173 ******* 2025-02-10 09:02:56.375040 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:56.377236 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:56.377327 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:56.377592 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:56.378258 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:56.378690 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:56.379838 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:56.380758 | orchestrator | 2025-02-10 09:02:56.381524 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-10 09:02:56.383768 | orchestrator | Monday 10 February 2025 09:02:56 +0000 (0:00:01.257) 0:00:25.431 ******* 2025-02-10 09:03:10.436223 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:10.436965 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:10.437156 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:10.437188 | orchestrator | changed: [testbed-manager] 2025-02-10 09:03:10.438414 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:03:10.440459 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:03:10.441293 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:03:10.441864 | orchestrator | 2025-02-10 09:03:10.442421 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-02-10 09:03:10.442847 | orchestrator | Monday 10 February 2025 09:03:10 +0000 (0:00:14.059) 0:00:39.491 ******* 2025-02-10 09:03:10.514668 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:10.546697 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:10.576094 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:10.613321 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:10.665983 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:10.666893 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:10.667766 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:10.668520 | orchestrator | 2025-02-10 09:03:10.669050 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-02-10 09:03:10.669451 | orchestrator | Monday 10 February 2025 09:03:10 +0000 (0:00:00.235) 0:00:39.726 ******* 2025-02-10 09:03:10.785779 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:10.816605 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:10.845201 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:10.919744 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:10.920774 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:10.921847 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:10.922725 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:10.923539 | orchestrator | 2025-02-10 09:03:10.924615 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-02-10 09:03:10.924921 | orchestrator | Monday 10 February 2025 09:03:10 +0000 (0:00:00.253) 0:00:39.980 ******* 2025-02-10 09:03:11.011552 | orchestrator | ok: 
[testbed-manager] 2025-02-10 09:03:11.037378 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:11.066206 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:11.091153 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:11.162002 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:11.162672 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:11.162720 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:11.163131 | orchestrator | 2025-02-10 09:03:11.164121 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-02-10 09:03:11.164703 | orchestrator | Monday 10 February 2025 09:03:11 +0000 (0:00:00.241) 0:00:40.222 ******* 2025-02-10 09:03:11.472371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:03:11.475197 | orchestrator | 2025-02-10 09:03:11.475538 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-02-10 09:03:11.476111 | orchestrator | Monday 10 February 2025 09:03:11 +0000 (0:00:00.308) 0:00:40.531 ******* 2025-02-10 09:03:13.769776 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:13.770228 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:13.770278 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:13.771036 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:13.772787 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:13.773825 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:13.775842 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:13.776204 | orchestrator | 2025-02-10 09:03:13.776233 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-02-10 09:03:13.776253 | orchestrator | Monday 10 February 2025 09:03:13 +0000 (0:00:02.297) 0:00:42.828 ******* 2025-02-10 09:03:14.882729 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:03:14.883336 | orchestrator | changed: [testbed-manager] 2025-02-10 09:03:14.886888 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:03:14.887453 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:03:14.887630 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:03:14.888484 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:03:14.889079 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:03:14.889604 | orchestrator | 2025-02-10 09:03:14.890079 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-02-10 09:03:14.890910 | orchestrator | Monday 10 February 2025 09:03:14 +0000 (0:00:01.113) 0:00:43.942 ******* 2025-02-10 09:03:15.727989 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:15.728196 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:15.730481 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:15.730949 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:15.731592 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:15.732332 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:15.733351 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:15.733965 | orchestrator | 2025-02-10 09:03:15.734626 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-02-10 09:03:15.735082 | orchestrator | Monday 10 February 2025 09:03:15 +0000 (0:00:00.845) 0:00:44.787 ******* 2025-02-10 09:03:16.059279 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:03:16.061069 | orchestrator | 2025-02-10 09:03:16.061291 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-02-10 09:03:16.064879 | orchestrator | Monday 10 February 2025 09:03:16 +0000 (0:00:00.328) 0:00:45.115 ******* 2025-02-10 09:03:17.292136 | orchestrator | changed: [testbed-manager] 2025-02-10 09:03:17.292459 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:03:17.292498 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:03:17.293768 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:03:17.294423 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:03:17.295674 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:03:17.296316 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:03:17.297274 | orchestrator | 2025-02-10 09:03:17.298534 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-02-10 09:03:17.299883 | orchestrator | Monday 10 February 2025 09:03:17 +0000 (0:00:01.234) 0:00:46.350 ******* 2025-02-10 09:03:17.381735 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:03:17.405791 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:03:17.435251 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:03:17.463321 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:03:17.636933 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:03:17.637914 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:03:17.639741 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:03:17.641136 | orchestrator | 2025-02-10 09:03:17.641985 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-02-10 09:03:17.643337 | orchestrator | Monday 10 February 2025 09:03:17 +0000 (0:00:00.346) 0:00:46.696 ******* 2025-02-10 09:03:30.722438 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:03:30.723503 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:03:30.723542 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:03:30.723558 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:03:30.723573 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:03:30.723617 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:03:30.723640 | orchestrator | changed: [testbed-manager] 2025-02-10 09:03:30.724132 | orchestrator | 2025-02-10 09:03:30.724639 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-02-10 09:03:30.724903 | orchestrator | Monday 10 February 2025 09:03:30 +0000 (0:00:13.080) 0:00:59.777 ******* 2025-02-10 09:03:31.686187 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:31.686645 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:31.686691 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:31.687193 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:31.688530 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:31.688937 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:31.688970 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:31.689289 | orchestrator | 2025-02-10 09:03:31.689739 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-02-10 09:03:31.690274 | orchestrator | Monday 10 February 
2025 09:03:31 +0000 (0:00:00.969) 0:01:00.746 ******* 2025-02-10 09:03:33.458482 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:33.458683 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:33.458714 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:33.462256 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:33.464151 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:33.464817 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:33.464874 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:33.465063 | orchestrator | 2025-02-10 09:03:33.465346 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-02-10 09:03:33.465853 | orchestrator | Monday 10 February 2025 09:03:33 +0000 (0:00:01.769) 0:01:02.516 ******* 2025-02-10 09:03:33.556169 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:33.597785 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:33.632278 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:33.669465 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:33.740643 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:33.740839 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:33.741535 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:33.742137 | orchestrator | 2025-02-10 09:03:33.744793 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-02-10 09:03:33.745699 | orchestrator | Monday 10 February 2025 09:03:33 +0000 (0:00:00.284) 0:01:02.801 ******* 2025-02-10 09:03:33.822827 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:33.851833 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:33.890001 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:33.916119 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:33.999440 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:34.000157 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:34.003468 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:34.003788 | orchestrator | 2025-02-10 09:03:34.005237 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-02-10 09:03:34.006572 | orchestrator | Monday 10 February 2025 09:03:33 +0000 (0:00:00.257) 0:01:03.058 ******* 2025-02-10 09:03:34.366943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:03:34.367501 | orchestrator | 2025-02-10 09:03:34.370124 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-02-10 09:03:34.370344 | orchestrator | Monday 10 February 2025 09:03:34 +0000 (0:00:00.368) 0:01:03.427 ******* 2025-02-10 09:03:36.138333 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:36.138593 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:36.141969 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:36.142104 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:36.142124 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:36.146283 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:36.147432 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:36.149005 | orchestrator | 2025-02-10 09:03:36.150306 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-02-10 09:03:36.151256 | orchestrator | Monday 10 February 
2025 09:03:36 +0000 (0:00:01.769) 0:01:05.197 ******* 2025-02-10 09:03:36.728966 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:03:36.729359 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:03:36.729935 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:03:36.730800 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:03:36.732343 | orchestrator | changed: [testbed-manager] 2025-02-10 09:03:36.733149 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:03:36.733169 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:03:36.735188 | orchestrator | 2025-02-10 09:03:36.737071 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-02-10 09:03:36.739332 | orchestrator | Monday 10 February 2025 09:03:36 +0000 (0:00:00.592) 0:01:05.789 ******* 2025-02-10 09:03:36.799031 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:36.854897 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:36.894646 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:36.923095 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:36.987497 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:36.988470 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:36.988529 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:36.988930 | orchestrator | 2025-02-10 09:03:36.990824 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-02-10 09:03:36.992040 | orchestrator | Monday 10 February 2025 09:03:36 +0000 (0:00:00.258) 0:01:06.048 ******* 2025-02-10 09:03:38.303804 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:38.304295 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:38.304345 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:38.305196 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:38.307111 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:38.307691 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:03:38.308094 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:38.308782 | orchestrator | 2025-02-10 09:03:38.309517 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-02-10 09:03:38.310149 | orchestrator | Monday 10 February 2025 09:03:38 +0000 (0:00:01.313) 0:01:07.362 ******* 2025-02-10 09:03:40.013115 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:03:40.013571 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:03:40.014787 | orchestrator | changed: [testbed-manager] 2025-02-10 09:03:40.016125 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:03:40.016835 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:03:40.016889 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:03:40.019858 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:03:40.021277 | orchestrator | 2025-02-10 09:03:40.021425 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-02-10 09:03:40.021451 | orchestrator | Monday 10 February 2025 09:03:40 +0000 (0:00:01.709) 0:01:09.071 ******* 2025-02-10 09:03:42.669100 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:42.669834 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:42.673520 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:42.674526 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:42.675204 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:42.676756 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:42.677657 | orchestrator | ok: [testbed-node-2] 
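
The package handling above follows a download-then-apply pattern: the apt cache is refreshed (honouring apt_cache_valid_time), upgrade packages are fetched first, and only then applied. A minimal sketch of how these steps could be expressed with the apt module; the variable name is taken from the task names above, but the concrete module options (including the use of download_only for the fetch step) are assumptions for illustration, not the exact osism.commons.packages implementation:

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: "{{ apt_cache_valid_time }}"

    - name: Download upgrade packages
      ansible.builtin.apt:
        upgrade: dist
        download_only: true   # assumption: pre-fetch packages without installing them yet

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist

In this run the download step reports "changed" on every host while the upgrade step reports "ok", as seen in the results above.
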
2025-02-10 09:03:42.677699 | orchestrator | 2025-02-10 09:03:42.678376 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-02-10 09:03:42.679344 | orchestrator | Monday 10 February 2025 09:03:42 +0000 (0:00:02.655) 0:01:11.727 ******* 2025-02-10 09:04:16.526694 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:16.527066 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:16.527112 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:16.527128 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:16.527142 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:16.527164 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:16.527354 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:16.527734 | orchestrator | 2025-02-10 09:04:16.528067 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-02-10 09:04:16.528644 | orchestrator | Monday 10 February 2025 09:04:16 +0000 (0:00:33.853) 0:01:45.581 ******* 2025-02-10 09:05:32.513172 | orchestrator | changed: [testbed-manager] 2025-02-10 09:05:32.513570 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:05:32.513908 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:05:32.513951 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:05:32.516179 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:05:32.516888 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:05:32.517431 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:05:32.518344 | orchestrator | 2025-02-10 09:05:32.518959 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-02-10 09:05:32.519667 | orchestrator | Monday 10 February 2025 09:05:32 +0000 (0:01:15.989) 0:03:01.570 ******* 2025-02-10 09:05:34.307018 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:05:34.307170 | orchestrator | ok: [testbed-manager] 2025-02-10 09:05:34.307686 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:05:34.309122 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:05:34.309573 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:05:34.310611 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:05:34.311577 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:05:34.312646 | orchestrator | 2025-02-10 09:05:34.313937 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-02-10 09:05:34.314741 | orchestrator | Monday 10 February 2025 09:05:34 +0000 (0:00:01.793) 0:03:03.364 ******* 2025-02-10 09:05:40.704288 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:05:40.706512 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:05:40.706574 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:05:40.706601 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:05:40.707107 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:05:40.707153 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:05:40.708698 | orchestrator | changed: [testbed-manager] 2025-02-10 09:05:40.708980 | orchestrator | 2025-02-10 09:05:40.709815 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-02-10 09:05:40.711239 | orchestrator | Monday 10 February 2025 09:05:40 +0000 (0:00:06.397) 0:03:09.761 ******* 2025-02-10 09:05:41.082463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-02-10 09:05:41.084016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-02-10 09:05:41.084696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-02-10 09:05:41.085750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-02-10 09:05:41.086652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-02-10 09:05:41.087048 | orchestrator | 2025-02-10 09:05:41.087800 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-02-10 09:05:41.088184 | orchestrator | Monday 10 February 2025 09:05:41 +0000 (0:00:00.381) 0:03:10.143 ******* 2025-02-10 09:05:41.143856 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-10 09:05:41.165757 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:05:41.255442 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-10 09:05:41.746903 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-10 09:05:41.747068 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:05:41.749559 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:05:41.749770 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-10 09:05:41.751662 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:05:41.753111 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:05:41.753516 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:05:41.754548 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:05:41.755194 | orchestrator | 2025-02-10 09:05:41.755814 | orchestrator | 
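
The sysctl role loops over named parameter groups (elasticsearch, rabbitmq, generic, compute, k3s_node), and each group is applied only on hosts that play the corresponding role, which is why vm.max_map_count is skipped on the manager and on nodes 3-5 but set on nodes 0-2. A hedged sketch of what the per-group task inside sysctl.yml could look like, assuming the ansible.posix.sysctl module and a group-membership condition (the actual logic in osism.commons.sysctl is not shown in this log):

    - name: "Set sysctl parameters on {{ item.key }}"
      ansible.posix.sysctl:
        name: "{{ parameter.name }}"
        value: "{{ parameter.value }}"
        sysctl_set: true
        state: present
      loop: "{{ item.value }}"
      loop_control:
        loop_var: parameter
      when: item.key == 'generic' or item.key in group_names   # assumption about the skip condition

Only the 'generic' group (vm.swappiness) is applied to every host including the manager, which matches the results further down.
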
TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-02-10 09:05:41.756269 | orchestrator | Monday 10 February 2025 09:05:41 +0000 (0:00:00.662) 0:03:10.805 ******* 2025-02-10 09:05:41.828353 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-10 09:05:41.863588 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-10 09:05:41.863669 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-10 09:05:41.863686 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-10 09:05:41.863701 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-10 09:05:41.863716 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-10 09:05:41.863729 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-10 09:05:41.863745 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-10 09:05:41.863760 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-10 09:05:41.863774 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-10 09:05:41.863801 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:05:41.962291 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-10 09:05:41.962456 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-10 09:05:41.962765 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-10 09:05:41.963755 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-10 09:05:41.965731 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-10 09:05:41.968773 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-10 09:05:41.968858 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-10 09:05:41.969548 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-10 09:05:41.969970 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-10 09:05:41.970432 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-10 09:05:41.970930 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-10 09:05:47.757489 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-10 09:05:47.757751 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:05:47.757795 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-10 09:05:47.762225 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-10 
09:05:47.763595 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-10 09:05:47.765154 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-10 09:05:47.765566 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-10 09:05:47.765593 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-10 09:05:47.766270 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-10 09:05:47.767409 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-10 09:05:47.767722 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:05:47.768101 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-10 09:05:47.769089 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-10 09:05:47.770253 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-10 09:05:47.770336 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-10 09:05:47.770388 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-10 09:05:47.770600 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-10 09:05:47.771130 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-10 09:05:47.771573 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-10 09:05:47.771896 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-10 09:05:47.772453 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-10 09:05:47.772637 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:05:47.774125 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-10 09:05:47.774508 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-10 09:05:47.775139 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-10 09:05:47.776047 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-10 09:05:47.776895 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-10 09:05:47.777155 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-10 09:05:47.777609 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-10 09:05:47.777758 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-10 09:05:47.778994 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-10 09:05:47.779123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 
'value': 16777216}) 2025-02-10 09:05:47.779149 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-10 09:05:47.779869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-10 09:05:47.780093 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-10 09:05:47.780471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-10 09:05:47.780919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-10 09:05:47.781053 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-10 09:05:47.781882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-10 09:05:47.782130 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-10 09:05:47.783602 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-10 09:05:47.783715 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-10 09:05:47.783753 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-10 09:05:47.784039 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-10 09:05:47.786084 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-10 09:05:47.786222 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-10 09:05:47.786245 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-10 09:05:47.786266 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-10 09:05:47.786350 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-10 09:05:47.786424 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-10 09:05:47.786964 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-10 09:05:47.787225 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-10 09:05:47.787918 | orchestrator | 2025-02-10 09:05:47.787982 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-02-10 09:05:47.788076 | orchestrator | Monday 10 February 2025 09:05:47 +0000 (0:00:06.010) 0:03:16.815 ******* 2025-02-10 09:05:48.384453 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:05:48.384953 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:05:48.385538 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:05:48.386099 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:05:48.386447 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:05:48.386930 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:05:48.387704 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:05:48.388271 | orchestrator | 2025-02-10 09:05:48.389514 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-02-10 09:05:48.441219 | orchestrator | Monday 10 February 2025 09:05:48 +0000 (0:00:00.629) 0:03:17.444 ******* 2025-02-10 09:05:48.441402 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-10 09:05:48.470716 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:05:48.470869 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-10 09:05:48.515334 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:05:48.516023 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-10 09:05:48.516077 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-10 09:05:48.544306 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:05:48.570289 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:05:49.983707 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-10 09:05:49.984028 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-10 09:05:49.985053 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-10 09:05:49.985432 | orchestrator | 2025-02-10 09:05:49.985465 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-02-10 09:05:49.986138 | orchestrator | Monday 10 February 2025 09:05:49 +0000 (0:00:01.598) 0:03:19.043 ******* 2025-02-10 09:05:50.052264 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-10 09:05:50.081097 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-10 09:05:50.081272 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:05:50.112953 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-10 09:05:50.113462 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:05:50.114090 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-10 09:05:50.139453 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:05:50.166222 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:05:51.727558 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-10 09:05:51.727764 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-10 09:05:51.729129 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-10 09:05:51.729847 | orchestrator | 2025-02-10 09:05:51.730964 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-02-10 09:05:51.731562 | orchestrator | Monday 10 February 2025 09:05:51 +0000 (0:00:01.741) 0:03:20.784 ******* 
2025-02-10 09:05:51.810905 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:05:51.835921 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:05:51.860860 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:05:51.886745 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:05:52.047978 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:05:52.051891 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:05:52.054346 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:05:52.054543 | orchestrator | 2025-02-10 09:05:52.054585 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-02-10 09:05:52.055349 | orchestrator | Monday 10 February 2025 09:05:52 +0000 (0:00:00.323) 0:03:21.108 ******* 2025-02-10 09:05:57.649875 | orchestrator | ok: [testbed-manager] 2025-02-10 09:05:57.650247 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:05:57.650340 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:05:57.650561 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:05:57.654132 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:05:57.658230 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:05:57.661202 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:05:57.661337 | orchestrator | 2025-02-10 09:05:57.661864 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-02-10 09:05:57.662420 | orchestrator | Monday 10 February 2025 09:05:57 +0000 (0:00:05.600) 0:03:26.708 ******* 2025-02-10 09:05:57.717958 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-02-10 09:05:57.759135 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:05:57.759594 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-02-10 09:05:57.793608 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:05:57.829960 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-02-10 09:05:57.830143 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:05:57.865605 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-02-10 09:05:57.865753 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:05:57.938828 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-02-10 09:05:57.939010 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:05:57.939151 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-02-10 09:05:57.939349 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:05:57.940010 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-02-10 09:05:57.940468 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:05:57.940499 | orchestrator | 2025-02-10 09:05:57.940913 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-02-10 09:05:57.941011 | orchestrator | Monday 10 February 2025 09:05:57 +0000 (0:00:00.293) 0:03:27.001 ******* 2025-02-10 09:05:59.183817 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-02-10 09:05:59.184516 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-02-10 09:05:59.186302 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-02-10 09:05:59.187927 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-02-10 09:05:59.188550 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-02-10 09:05:59.190092 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-02-10 09:05:59.191045 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-02-10 09:05:59.191906 | orchestrator | 2025-02-10 09:05:59.192806 | 
orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-02-10 09:05:59.193220 | orchestrator | Monday 10 February 2025 09:05:59 +0000 (0:00:01.239) 0:03:28.241 ******* 2025-02-10 09:05:59.710558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:05:59.710795 | orchestrator | 2025-02-10 09:05:59.713194 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-02-10 09:06:00.936708 | orchestrator | Monday 10 February 2025 09:05:59 +0000 (0:00:00.529) 0:03:28.770 ******* 2025-02-10 09:06:00.936910 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:00.936989 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:00.937917 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:00.939424 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:00.940423 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:00.941475 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:00.942846 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:00.943466 | orchestrator | 2025-02-10 09:06:00.944221 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-02-10 09:06:00.945023 | orchestrator | Monday 10 February 2025 09:06:00 +0000 (0:00:01.225) 0:03:29.995 ******* 2025-02-10 09:06:01.566724 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:01.566972 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:01.567006 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:01.568336 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:01.568649 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:01.569556 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:01.570081 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:01.570731 | orchestrator | 2025-02-10 09:06:01.571484 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-02-10 09:06:01.572021 | orchestrator | Monday 10 February 2025 09:06:01 +0000 (0:00:00.628) 0:03:30.624 ******* 2025-02-10 09:06:02.179871 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:02.180024 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:02.180947 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:02.182051 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:02.186095 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:02.186337 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:02.187032 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:02.188238 | orchestrator | 2025-02-10 09:06:02.188549 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-02-10 09:06:02.189319 | orchestrator | Monday 10 February 2025 09:06:02 +0000 (0:00:00.615) 0:03:31.240 ******* 2025-02-10 09:06:02.826607 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:02.828315 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:02.828488 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:02.829159 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:02.830296 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:02.831074 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:02.831587 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:02.832431 | orchestrator | 2025-02-10 
09:06:02.832922 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-02-10 09:06:02.833521 | orchestrator | Monday 10 February 2025 09:06:02 +0000 (0:00:00.644) 0:03:31.884 ******* 2025-02-10 09:06:03.861625 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739176555.777675, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.862620 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739178089.756173, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.863941 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739178089.777991, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.864711 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739178089.758074, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.865453 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739178089.7265244, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.868963 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739178089.7528648, 'mtime': 1723170802.0, 
'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.869400 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739178089.7876616, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.869476 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176578.7777057, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.869494 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176485.3083112, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.869515 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176487.796695, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.871339 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176486.4521396, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.872582 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176480.54954, 'mtime': 1712646062.0, 
'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.872764 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176499.276991, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.873266 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176481.29265, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:06:03.873896 | orchestrator | 2025-02-10 09:06:03.874303 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-02-10 09:06:03.874747 | orchestrator | Monday 10 February 2025 09:06:03 +0000 (0:00:01.034) 0:03:32.919 ******* 2025-02-10 09:06:05.029395 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:05.030137 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:05.031051 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:05.032141 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:05.032913 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:05.033569 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:05.034632 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:05.035603 | orchestrator | 2025-02-10 09:06:05.036323 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-02-10 09:06:05.036956 | orchestrator | Monday 10 February 2025 09:06:05 +0000 (0:00:01.168) 0:03:34.088 ******* 2025-02-10 09:06:06.277517 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:06.278010 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:06.279591 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:06.280510 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:06.281180 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:06.281971 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:06.282980 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:06.283769 | orchestrator | 2025-02-10 09:06:06.284533 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-02-10 09:06:06.285245 | orchestrator | Monday 10 February 2025 09:06:06 +0000 (0:00:01.248) 0:03:35.337 ******* 2025-02-10 09:06:06.377757 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:06.410877 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:06.448496 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:06.478541 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:06:06.538309 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:06.538526 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:06.539956 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:06.540807 | orchestrator | 2025-02-10 09:06:06.541419 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-02-10 09:06:06.542131 | orchestrator | Monday 10 February 2025 09:06:06 +0000 (0:00:00.259) 0:03:35.597 ******* 2025-02-10 09:06:07.316540 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:07.317258 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:07.317299 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:07.319417 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:07.320722 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:07.321857 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:07.323057 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:07.324250 | orchestrator | 2025-02-10 09:06:07.324326 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-02-10 09:06:07.325347 | orchestrator | Monday 10 February 2025 09:06:07 +0000 (0:00:00.778) 0:03:36.375 ******* 2025-02-10 09:06:07.720536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:06:07.722094 | orchestrator | 2025-02-10 09:06:07.722815 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-02-10 09:06:07.724256 | orchestrator | Monday 10 February 2025 09:06:07 +0000 (0:00:00.403) 0:03:36.779 ******* 2025-02-10 09:06:15.694444 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:15.694977 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:15.695036 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:15.695881 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:15.697405 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:15.698079 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:15.698316 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:15.699741 | orchestrator | 2025-02-10 09:06:15.700151 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-02-10 09:06:15.701108 | orchestrator | Monday 10 February 2025 09:06:15 +0000 (0:00:07.973) 0:03:44.753 ******* 2025-02-10 09:06:16.944476 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:16.946187 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:16.946231 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:16.946247 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:16.946261 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:16.946286 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:16.946538 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:16.946805 | orchestrator | 2025-02-10 09:06:16.946845 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-02-10 09:06:16.947205 | orchestrator | Monday 10 February 2025 09:06:16 +0000 (0:00:01.250) 0:03:46.003 ******* 2025-02-10 09:06:17.950450 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:17.953584 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:17.953628 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:17.953641 | orchestrator | ok: 
[testbed-node-3] 2025-02-10 09:06:17.954273 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:17.954942 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:17.955845 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:17.956682 | orchestrator | 2025-02-10 09:06:17.958930 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-02-10 09:06:18.548933 | orchestrator | Monday 10 February 2025 09:06:17 +0000 (0:00:01.005) 0:03:47.009 ******* 2025-02-10 09:06:18.549137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:06:18.549836 | orchestrator | 2025-02-10 09:06:18.553122 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-02-10 09:06:27.288467 | orchestrator | Monday 10 February 2025 09:06:18 +0000 (0:00:00.600) 0:03:47.609 ******* 2025-02-10 09:06:27.288629 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:27.290679 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:27.290784 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:27.290806 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:27.291076 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:27.294277 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:27.294630 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:27.297427 | orchestrator | 2025-02-10 09:06:27.297828 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-02-10 09:06:27.301231 | orchestrator | Monday 10 February 2025 09:06:27 +0000 (0:00:08.738) 0:03:56.347 ******* 2025-02-10 09:06:27.950767 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:27.952614 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:27.952661 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:27.953032 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:27.953718 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:27.954604 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:27.954833 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:27.955457 | orchestrator | 2025-02-10 09:06:27.955976 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-02-10 09:06:27.956491 | orchestrator | Monday 10 February 2025 09:06:27 +0000 (0:00:00.665) 0:03:57.013 ******* 2025-02-10 09:06:29.246951 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:29.247179 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:29.247212 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:29.247482 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:29.247521 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:29.249378 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:29.249885 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:29.250334 | orchestrator | 2025-02-10 09:06:29.251243 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-02-10 09:06:29.253045 | orchestrator | Monday 10 February 2025 09:06:29 +0000 (0:00:01.293) 0:03:58.306 ******* 2025-02-10 09:06:30.297392 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:30.297752 | orchestrator | changed: [testbed-node-0] 2025-02-10 
09:06:30.301343 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:30.301504 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:30.301526 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:30.301540 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:30.301553 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:30.301566 | orchestrator | 2025-02-10 09:06:30.301580 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-02-10 09:06:30.301599 | orchestrator | Monday 10 February 2025 09:06:30 +0000 (0:00:01.050) 0:03:59.356 ******* 2025-02-10 09:06:30.425644 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:30.464545 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:30.498386 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:30.535685 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:30.606707 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:30.607194 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:30.608008 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:30.610212 | orchestrator | 2025-02-10 09:06:30.709027 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-02-10 09:06:30.709170 | orchestrator | Monday 10 February 2025 09:06:30 +0000 (0:00:00.310) 0:03:59.667 ******* 2025-02-10 09:06:30.709211 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:30.745085 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:30.785457 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:30.816558 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:30.921433 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:30.921939 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:30.922516 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:30.923695 | orchestrator | 2025-02-10 09:06:30.924820 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-02-10 09:06:30.926825 | orchestrator | Monday 10 February 2025 09:06:30 +0000 (0:00:00.314) 0:03:59.982 ******* 2025-02-10 09:06:31.030611 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:31.069963 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:31.104322 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:31.136638 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:31.217532 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:31.217896 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:31.219026 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:31.220175 | orchestrator | 2025-02-10 09:06:31.221174 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-02-10 09:06:31.222485 | orchestrator | Monday 10 February 2025 09:06:31 +0000 (0:00:00.297) 0:04:00.279 ******* 2025-02-10 09:06:36.929652 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:36.930629 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:36.930658 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:36.931097 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:36.932221 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:36.933086 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:36.934437 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:36.935081 | orchestrator | 2025-02-10 09:06:36.936172 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-02-10 09:06:36.937009 | orchestrator | Monday 
10 February 2025 09:06:36 +0000 (0:00:05.709) 0:04:05.989 ******* 2025-02-10 09:06:37.354094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:06:37.354330 | orchestrator | 2025-02-10 09:06:37.354448 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-02-10 09:06:37.430450 | orchestrator | Monday 10 February 2025 09:06:37 +0000 (0:00:00.424) 0:04:06.413 ******* 2025-02-10 09:06:37.430597 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-02-10 09:06:37.475997 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-02-10 09:06:37.476151 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-02-10 09:06:37.476813 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:37.479251 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-02-10 09:06:37.480431 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-02-10 09:06:37.481413 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-02-10 09:06:37.510876 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:37.552829 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-02-10 09:06:37.553825 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:37.554543 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-02-10 09:06:37.555398 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-02-10 09:06:37.589002 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-02-10 09:06:37.589588 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:37.590473 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-02-10 09:06:37.665310 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-02-10 09:06:37.665539 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:37.666863 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:37.668305 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-02-10 09:06:37.668944 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-02-10 09:06:37.670479 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:37.671607 | orchestrator | 2025-02-10 09:06:37.671946 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-02-10 09:06:37.672942 | orchestrator | Monday 10 February 2025 09:06:37 +0000 (0:00:00.311) 0:04:06.725 ******* 2025-02-10 09:06:38.074546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:06:38.074836 | orchestrator | 2025-02-10 09:06:38.075587 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-02-10 09:06:38.077080 | orchestrator | Monday 10 February 2025 09:06:38 +0000 (0:00:00.410) 0:04:07.135 ******* 2025-02-10 09:06:38.149628 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-02-10 09:06:38.187915 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:38.189143 | orchestrator | 
skipping: [testbed-node-0] => (item=ModemManager.service)  2025-02-10 09:06:38.229338 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:38.229564 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-02-10 09:06:38.270646 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:38.271083 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-02-10 09:06:38.271967 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-02-10 09:06:38.304224 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:38.377152 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-02-10 09:06:38.377955 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:38.379163 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:38.382055 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-02-10 09:06:38.382957 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:38.382986 | orchestrator | 2025-02-10 09:06:38.382995 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-02-10 09:06:38.383010 | orchestrator | Monday 10 February 2025 09:06:38 +0000 (0:00:00.302) 0:04:07.438 ******* 2025-02-10 09:06:38.910599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:06:38.911519 | orchestrator | 2025-02-10 09:06:38.914142 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-02-10 09:07:11.985865 | orchestrator | Monday 10 February 2025 09:06:38 +0000 (0:00:00.531) 0:04:07.970 ******* 2025-02-10 09:07:11.986103 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:11.987111 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:11.987145 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:11.987163 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:11.987202 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:11.987678 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:11.988623 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:11.990063 | orchestrator | 2025-02-10 09:07:11.991089 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-02-10 09:07:11.991789 | orchestrator | Monday 10 February 2025 09:07:11 +0000 (0:00:33.071) 0:04:41.042 ******* 2025-02-10 09:07:20.222687 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:20.224184 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:20.224234 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:20.227835 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:20.229004 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:20.230578 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:20.231728 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:20.232675 | orchestrator | 2025-02-10 09:07:20.233855 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-02-10 09:07:20.234347 | orchestrator | Monday 10 February 2025 09:07:20 +0000 (0:00:08.237) 0:04:49.280 ******* 2025-02-10 09:07:28.164532 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:28.166644 | orchestrator | changed: [testbed-node-0] 
2025-02-10 09:07:28.166701 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:28.170381 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:28.171257 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:28.172161 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:28.173163 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:28.174121 | orchestrator | 2025-02-10 09:07:28.176857 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-02-10 09:07:28.178257 | orchestrator | Monday 10 February 2025 09:07:28 +0000 (0:00:07.939) 0:04:57.219 ******* 2025-02-10 09:07:29.938989 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:29.939914 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:29.940647 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:29.941768 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:29.942482 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:29.943766 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:29.944210 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:29.944264 | orchestrator | 2025-02-10 09:07:29.944671 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-02-10 09:07:29.945073 | orchestrator | Monday 10 February 2025 09:07:29 +0000 (0:00:01.776) 0:04:58.996 ******* 2025-02-10 09:07:36.062986 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:36.063286 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:36.063934 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:36.064511 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:36.066201 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:36.066721 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:36.066756 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:36.067748 | orchestrator | 2025-02-10 09:07:36.067901 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-02-10 09:07:36.068519 | orchestrator | Monday 10 February 2025 09:07:36 +0000 (0:00:06.127) 0:05:05.124 ******* 2025-02-10 09:07:36.547163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:07:36.547526 | orchestrator | 2025-02-10 09:07:36.547577 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-02-10 09:07:36.548600 | orchestrator | Monday 10 February 2025 09:07:36 +0000 (0:00:00.481) 0:05:05.605 ******* 2025-02-10 09:07:37.374768 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:37.376758 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:37.379298 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:37.379746 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:37.379783 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:37.379798 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:37.379820 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:37.380751 | orchestrator | 2025-02-10 09:07:37.381309 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-02-10 09:07:37.381343 | orchestrator | Monday 10 February 2025 09:07:37 +0000 (0:00:00.828) 0:05:06.434 ******* 2025-02-10 09:07:39.083477 | orchestrator | ok: [testbed-node-0] 
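(Aside: the cleanup block above skips the apt-daily timer handling, removes the cloud-init and unattended-upgrades packages, autoremoves unused dependencies and deletes the cloud-init configuration directory. The osism.commons.cleanup task files themselves are not reproduced in this log; the following is only a minimal, hypothetical Ansible sketch of equivalent steps, not the role's actual implementation.)

- name: Cleanup sketch (illustrative only, not the osism.commons.cleanup role)
  hosts: all
  become: true
  tasks:
    - name: Disable apt-daily timers  # skipped in this run; shown for completeness
      ansible.builtin.systemd:
        name: "{{ item }}.timer"
        enabled: false
        state: stopped
      loop:
        - apt-daily
        - apt-daily-upgrade

    - name: Remove cloud-init and unattended-upgrades packages
      ansible.builtin.apt:
        name:
          - cloud-init
          - unattended-upgrades
        state: absent
        purge: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true
        autoclean: true

    - name: Remove cloud-init configuration directory  # path is an assumption
      ansible.builtin.file:
        path: /etc/cloud
        state: absent

(End of aside; the log continues below.)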
2025-02-10 09:07:39.084264 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:39.084301 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:39.084508 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:39.085922 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:39.086097 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:39.086905 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:39.087730 | orchestrator | 2025-02-10 09:07:39.088820 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-02-10 09:07:39.089123 | orchestrator | Monday 10 February 2025 09:07:39 +0000 (0:00:01.709) 0:05:08.143 ******* 2025-02-10 09:07:39.858195 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:39.859143 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:39.863006 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:39.864177 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:39.865763 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:39.865996 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:39.867030 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:39.868296 | orchestrator | 2025-02-10 09:07:39.869101 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-02-10 09:07:39.869993 | orchestrator | Monday 10 February 2025 09:07:39 +0000 (0:00:00.774) 0:05:08.918 ******* 2025-02-10 09:07:39.921244 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:39.953545 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:39.986309 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:40.017107 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:40.115199 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:40.116566 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:40.117829 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:40.119322 | orchestrator | 2025-02-10 09:07:40.120576 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-02-10 09:07:40.121222 | orchestrator | Monday 10 February 2025 09:07:40 +0000 (0:00:00.256) 0:05:09.175 ******* 2025-02-10 09:07:40.202112 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:40.229311 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:40.265618 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:40.300856 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:40.524477 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:40.524935 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:40.525542 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:40.525977 | orchestrator | 2025-02-10 09:07:40.526751 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-02-10 09:07:40.527303 | orchestrator | Monday 10 February 2025 09:07:40 +0000 (0:00:00.409) 0:05:09.585 ******* 2025-02-10 09:07:40.638960 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:40.675233 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:40.730395 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:40.781872 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:40.871836 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:40.874260 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:40.876336 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:40.878188 | orchestrator | 2025-02-10 09:07:40.878346 | 
orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-02-10 09:07:40.879276 | orchestrator | Monday 10 February 2025 09:07:40 +0000 (0:00:00.347) 0:05:09.933 ******* 2025-02-10 09:07:40.990790 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:41.028400 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:41.061200 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:41.097670 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:41.165264 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:41.165797 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:41.166058 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:41.166595 | orchestrator | 2025-02-10 09:07:41.171320 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-02-10 09:07:41.264817 | orchestrator | Monday 10 February 2025 09:07:41 +0000 (0:00:00.293) 0:05:10.226 ******* 2025-02-10 09:07:41.264980 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:41.298726 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:41.338173 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:41.365330 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:41.442929 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:41.443812 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:41.445561 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:41.447583 | orchestrator | 2025-02-10 09:07:41.447676 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-02-10 09:07:41.448557 | orchestrator | Monday 10 February 2025 09:07:41 +0000 (0:00:00.277) 0:05:10.504 ******* 2025-02-10 09:07:41.641989 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:41.680573 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:41.724258 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:41.750427 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:41.794659 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:41.871826 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:41.873105 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:41.873956 | orchestrator | 2025-02-10 09:07:41.874912 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-02-10 09:07:41.876016 | orchestrator | Monday 10 February 2025 09:07:41 +0000 (0:00:00.427) 0:05:10.931 ******* 2025-02-10 09:07:41.980680 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:42.014641 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:42.046623 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:42.078276 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:42.169511 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:42.169718 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:42.172497 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:42.172570 | orchestrator | 2025-02-10 09:07:42.173637 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-02-10 09:07:42.174108 | orchestrator | Monday 10 February 2025 09:07:42 +0000 (0:00:00.297) 0:05:11.228 ******* 2025-02-10 09:07:42.626566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:07:42.627880 | orchestrator | 2025-02-10 09:07:42.628827 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-02-10 09:07:42.629921 | orchestrator | Monday 10 February 2025 09:07:42 +0000 (0:00:00.456) 0:05:11.684 ******* 2025-02-10 09:07:43.466273 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:43.466558 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:43.466988 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:43.467820 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:43.468619 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:43.469693 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:43.469776 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:43.470640 | orchestrator | 2025-02-10 09:07:43.471484 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-02-10 09:07:43.472138 | orchestrator | Monday 10 February 2025 09:07:43 +0000 (0:00:00.841) 0:05:12.525 ******* 2025-02-10 09:07:45.861548 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:45.861842 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:45.862947 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:45.863829 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:45.864463 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:45.865154 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:45.865751 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:45.867187 | orchestrator | 2025-02-10 09:07:45.921542 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-02-10 09:07:45.921693 | orchestrator | Monday 10 February 2025 09:07:45 +0000 (0:00:02.396) 0:05:14.922 ******* 2025-02-10 09:07:45.921744 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-02-10 09:07:46.001717 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-02-10 09:07:46.001927 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-02-10 09:07:46.002115 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-02-10 09:07:46.002642 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-02-10 09:07:46.003011 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-02-10 09:07:46.066622 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:46.066912 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-02-10 09:07:46.138908 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-02-10 09:07:46.147172 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:46.147323 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-02-10 09:07:46.150256 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-02-10 09:07:46.150606 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-02-10 09:07:46.212441 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:46.213946 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-02-10 09:07:46.214191 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-02-10 09:07:46.214570 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-02-10 09:07:46.214867 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-02-10 09:07:46.403601 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:07:46.403817 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-02-10 09:07:46.404386 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-02-10 09:07:46.404428 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-02-10 09:07:46.536955 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:46.540253 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:46.541982 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-02-10 09:07:46.542108 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-02-10 09:07:46.542129 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-02-10 09:07:46.542155 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:46.542997 | orchestrator | 2025-02-10 09:07:46.545417 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-02-10 09:07:46.545901 | orchestrator | Monday 10 February 2025 09:07:46 +0000 (0:00:00.674) 0:05:15.597 ******* 2025-02-10 09:07:52.198902 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:52.199611 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:52.199669 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:52.201651 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:52.202585 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:52.203445 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:52.203879 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:52.204788 | orchestrator | 2025-02-10 09:07:52.205531 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-02-10 09:07:52.207288 | orchestrator | Monday 10 February 2025 09:07:52 +0000 (0:00:05.659) 0:05:21.256 ******* 2025-02-10 09:07:53.232753 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:53.233091 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:53.236259 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:53.237503 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:53.237540 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:53.237555 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:53.237570 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:53.237585 | orchestrator | 2025-02-10 09:07:53.237600 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-02-10 09:07:53.237623 | orchestrator | Monday 10 February 2025 09:07:53 +0000 (0:00:01.032) 0:05:22.289 ******* 2025-02-10 09:08:00.915868 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:00.916559 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:00.918277 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:00.918495 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:00.919431 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:00.920534 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:00.921017 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:00.922828 | orchestrator | 2025-02-10 09:08:00.923568 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-02-10 09:08:00.923614 | orchestrator | Monday 10 February 2025 09:08:00 +0000 (0:00:07.684) 0:05:29.973 ******* 2025-02-10 09:08:04.150677 | orchestrator | changed: [testbed-manager] 2025-02-10 09:08:04.151215 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:04.152843 | orchestrator | 
changed: [testbed-node-1] 2025-02-10 09:08:04.153775 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:04.155700 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:04.156066 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:04.158510 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:04.158777 | orchestrator | 2025-02-10 09:08:04.159380 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-02-10 09:08:04.159731 | orchestrator | Monday 10 February 2025 09:08:04 +0000 (0:00:03.234) 0:05:33.208 ******* 2025-02-10 09:08:05.688846 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:05.689377 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:05.689674 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:05.690455 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:05.690578 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:05.691887 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:05.692065 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:05.692101 | orchestrator | 2025-02-10 09:08:05.692134 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-02-10 09:08:05.692427 | orchestrator | Monday 10 February 2025 09:08:05 +0000 (0:00:01.540) 0:05:34.748 ******* 2025-02-10 09:08:07.024178 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:07.025031 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:07.025073 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:07.025089 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:07.025105 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:07.025127 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:07.026189 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:07.026993 | orchestrator | 2025-02-10 09:08:07.027994 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-02-10 09:08:07.029211 | orchestrator | Monday 10 February 2025 09:08:07 +0000 (0:00:01.328) 0:05:36.076 ******* 2025-02-10 09:08:07.229683 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:07.293322 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:07.365957 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:07.429641 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:07.695628 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:07.696180 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:07.697884 | orchestrator | changed: [testbed-manager] 2025-02-10 09:08:07.699127 | orchestrator | 2025-02-10 09:08:07.699837 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-02-10 09:08:07.700743 | orchestrator | Monday 10 February 2025 09:08:07 +0000 (0:00:00.676) 0:05:36.753 ******* 2025-02-10 09:08:17.906718 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:17.907065 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:17.907112 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:17.907128 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:17.907152 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:17.907456 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:17.910432 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:17.910687 | orchestrator | 2025-02-10 09:08:17.911436 | orchestrator | TASK [osism.services.docker : Lock containerd package] 
************************* 2025-02-10 09:08:17.911788 | orchestrator | Monday 10 February 2025 09:08:17 +0000 (0:00:10.211) 0:05:46.964 ******* 2025-02-10 09:08:18.834330 | orchestrator | changed: [testbed-manager] 2025-02-10 09:08:18.834681 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:18.834717 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:18.835548 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:18.836138 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:18.836463 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:18.837123 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:18.838144 | orchestrator | 2025-02-10 09:08:18.838481 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-02-10 09:08:18.838927 | orchestrator | Monday 10 February 2025 09:08:18 +0000 (0:00:00.929) 0:05:47.894 ******* 2025-02-10 09:08:31.109966 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:31.111216 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:31.111278 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:31.111317 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:31.112712 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:31.112912 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:31.114151 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:31.114966 | orchestrator | 2025-02-10 09:08:31.116741 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-02-10 09:08:31.117547 | orchestrator | Monday 10 February 2025 09:08:31 +0000 (0:00:12.271) 0:06:00.165 ******* 2025-02-10 09:08:44.483205 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:44.485828 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:44.485897 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:44.485925 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:44.486544 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:44.486993 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:44.487064 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:44.487330 | orchestrator | 2025-02-10 09:08:44.487857 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-02-10 09:08:44.488064 | orchestrator | Monday 10 February 2025 09:08:44 +0000 (0:00:13.375) 0:06:13.540 ******* 2025-02-10 09:08:44.856382 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-02-10 09:08:44.949828 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-02-10 09:08:45.765563 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-02-10 09:08:45.765808 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-02-10 09:08:45.766537 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-02-10 09:08:45.766649 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-02-10 09:08:45.768102 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-02-10 09:08:45.768474 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-02-10 09:08:45.768828 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-02-10 09:08:45.769025 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-02-10 09:08:45.769821 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-02-10 09:08:45.769897 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-02-10 
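(Aside: the docker install tasks above add the upstream Docker apt repository with its GPG key, pin the docker and docker-cli package versions, and hold the containerd package so it is not upgraded unintentionally. Neither the repository template nor the pinned version strings are visible in this log, so the sketch below only illustrates the general pattern; the URL, distribution codename, version string and file paths are assumptions, not values taken from this run.)

- name: Docker repository and version pinning sketch (hypothetical values)
  hosts: all
  become: true
  tasks:
    - name: Ensure apt keyring directory exists
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: "0644"

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable"
        filename: docker
        state: present
        update_cache: true

    - name: Pin docker package versions  # version string is an assumption
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker
        mode: "0644"
        content: |
          Package: docker-ce docker-ce-cli
          Pin: version 5:27.*
          Pin-Priority: 1000

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold

(End of aside; the log continues below.)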
09:08:45.770507 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-02-10 09:08:45.770819 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-02-10 09:08:45.771154 | orchestrator | 2025-02-10 09:08:45.771878 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-02-10 09:08:45.772208 | orchestrator | Monday 10 February 2025 09:08:45 +0000 (0:00:01.282) 0:06:14.823 ******* 2025-02-10 09:08:45.920627 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:45.984837 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:46.057335 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:46.125196 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:46.188970 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:46.321590 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:46.323170 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:46.325623 | orchestrator | 2025-02-10 09:08:46.325659 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-02-10 09:08:46.325682 | orchestrator | Monday 10 February 2025 09:08:46 +0000 (0:00:00.555) 0:06:15.379 ******* 2025-02-10 09:08:50.483308 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:50.484180 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:50.484219 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:50.484237 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:50.484249 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:50.484261 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:50.484281 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:50.484558 | orchestrator | 2025-02-10 09:08:50.484594 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-02-10 09:08:50.485825 | orchestrator | Monday 10 February 2025 09:08:50 +0000 (0:00:04.160) 0:06:19.539 ******* 2025-02-10 09:08:50.625560 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:50.699978 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:50.771219 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:50.839675 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:50.911038 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:51.022456 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:51.023224 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:51.024052 | orchestrator | 2025-02-10 09:08:51.025224 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-02-10 09:08:51.026102 | orchestrator | Monday 10 February 2025 09:08:51 +0000 (0:00:00.542) 0:06:20.081 ******* 2025-02-10 09:08:51.116408 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-02-10 09:08:51.116646 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-02-10 09:08:51.229895 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:51.230550 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-02-10 09:08:51.231108 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-02-10 09:08:51.309152 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:51.309853 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-02-10 09:08:51.310814 | orchestrator | skipping: [testbed-node-1] => 
(item=python-docker)  2025-02-10 09:08:51.383930 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:51.384990 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-02-10 09:08:51.393172 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-02-10 09:08:51.467223 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:51.468127 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-02-10 09:08:51.469253 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-02-10 09:08:51.550840 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:51.553063 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-02-10 09:08:51.650099 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-02-10 09:08:51.650234 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:51.650339 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-02-10 09:08:51.651131 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-02-10 09:08:51.651933 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:51.652470 | orchestrator | 2025-02-10 09:08:51.652935 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-02-10 09:08:51.655920 | orchestrator | Monday 10 February 2025 09:08:51 +0000 (0:00:00.628) 0:06:20.710 ******* 2025-02-10 09:08:51.789806 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:51.859925 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:51.944941 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:52.016946 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:52.084272 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:52.193173 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:52.193670 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:52.194829 | orchestrator | 2025-02-10 09:08:52.196130 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-02-10 09:08:52.197969 | orchestrator | Monday 10 February 2025 09:08:52 +0000 (0:00:00.544) 0:06:21.254 ******* 2025-02-10 09:08:52.329322 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:52.404101 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:52.469011 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:52.537070 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:52.606888 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:52.723626 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:52.724758 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:52.724789 | orchestrator | 2025-02-10 09:08:52.725729 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-02-10 09:08:52.729171 | orchestrator | Monday 10 February 2025 09:08:52 +0000 (0:00:00.528) 0:06:21.783 ******* 2025-02-10 09:08:52.863302 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:52.930896 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:53.006622 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:53.251667 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:53.314907 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:53.434108 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:53.434533 | orchestrator | skipping: [testbed-node-5] 2025-02-10 
09:08:53.435545 | orchestrator | 2025-02-10 09:08:53.436269 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-02-10 09:08:53.437509 | orchestrator | Monday 10 February 2025 09:08:53 +0000 (0:00:00.710) 0:06:22.494 ******* 2025-02-10 09:08:59.429532 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:59.430006 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:59.431909 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:59.435034 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:59.435256 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:59.435273 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:59.435283 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:59.435296 | orchestrator | 2025-02-10 09:08:59.436037 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-02-10 09:08:59.436870 | orchestrator | Monday 10 February 2025 09:08:59 +0000 (0:00:05.996) 0:06:28.490 ******* 2025-02-10 09:09:00.307607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:00.308239 | orchestrator | 2025-02-10 09:09:00.309270 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-02-10 09:09:00.310120 | orchestrator | Monday 10 February 2025 09:09:00 +0000 (0:00:00.876) 0:06:29.366 ******* 2025-02-10 09:09:00.754805 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:01.183116 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:01.184141 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:01.184212 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:01.184714 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:01.185994 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:01.186639 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:01.187542 | orchestrator | 2025-02-10 09:09:01.188729 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-02-10 09:09:01.190131 | orchestrator | Monday 10 February 2025 09:09:01 +0000 (0:00:00.876) 0:06:30.243 ******* 2025-02-10 09:09:01.697723 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:01.767629 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:01.853388 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:02.325974 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:02.326470 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:02.327476 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:02.333795 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:02.334388 | orchestrator | 2025-02-10 09:09:02.335203 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-02-10 09:09:02.336105 | orchestrator | Monday 10 February 2025 09:09:02 +0000 (0:00:01.141) 0:06:31.385 ******* 2025-02-10 09:09:03.726652 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:03.728406 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:03.728449 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:03.728466 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:03.728480 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:03.728502 | orchestrator | changed: 
[testbed-node-3] 2025-02-10 09:09:03.728666 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:03.728693 | orchestrator | 2025-02-10 09:09:03.729481 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-02-10 09:09:03.731213 | orchestrator | Monday 10 February 2025 09:09:03 +0000 (0:00:01.394) 0:06:32.779 ******* 2025-02-10 09:09:03.877520 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:05.110869 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:05.111464 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:05.111641 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:05.111727 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:05.112369 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:05.114280 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:05.114540 | orchestrator | 2025-02-10 09:09:05.114956 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-02-10 09:09:05.116143 | orchestrator | Monday 10 February 2025 09:09:05 +0000 (0:00:01.391) 0:06:34.171 ******* 2025-02-10 09:09:06.459013 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:06.459526 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:06.460174 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:06.461381 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:06.462329 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:06.463387 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:06.463791 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:06.464045 | orchestrator | 2025-02-10 09:09:06.465019 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-02-10 09:09:06.465404 | orchestrator | Monday 10 February 2025 09:09:06 +0000 (0:00:01.346) 0:06:35.517 ******* 2025-02-10 09:09:07.868022 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:07.869565 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:07.873203 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:07.875092 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:07.875214 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:07.877747 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:07.881282 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:07.881722 | orchestrator | 2025-02-10 09:09:07.881748 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-02-10 09:09:07.881787 | orchestrator | Monday 10 February 2025 09:09:07 +0000 (0:00:01.410) 0:06:36.927 ******* 2025-02-10 09:09:08.923845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:08.924123 | orchestrator | 2025-02-10 09:09:08.927212 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-02-10 09:09:10.258724 | orchestrator | Monday 10 February 2025 09:09:08 +0000 (0:00:01.056) 0:06:37.984 ******* 2025-02-10 09:09:10.258906 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:10.259897 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:10.260683 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:10.261927 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:10.263154 | orchestrator | ok: 
[testbed-node-3] 2025-02-10 09:09:10.263937 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:10.264516 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:10.265092 | orchestrator | 2025-02-10 09:09:10.265765 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-02-10 09:09:10.266468 | orchestrator | Monday 10 February 2025 09:09:10 +0000 (0:00:01.335) 0:06:39.319 ******* 2025-02-10 09:09:11.393102 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:11.393292 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:11.396610 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:11.396817 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:11.398961 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:11.399987 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:11.401426 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:11.402646 | orchestrator | 2025-02-10 09:09:11.403449 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-02-10 09:09:11.404373 | orchestrator | Monday 10 February 2025 09:09:11 +0000 (0:00:01.127) 0:06:40.447 ******* 2025-02-10 09:09:12.788991 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:12.789392 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:12.789423 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:12.789442 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:12.790004 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:12.791108 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:12.792082 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:12.792659 | orchestrator | 2025-02-10 09:09:12.793249 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-02-10 09:09:12.793498 | orchestrator | Monday 10 February 2025 09:09:12 +0000 (0:00:01.398) 0:06:41.845 ******* 2025-02-10 09:09:14.763184 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:14.763709 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:14.763765 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:14.764530 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:14.764932 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:14.765883 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:14.766940 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:14.767469 | orchestrator | 2025-02-10 09:09:14.767934 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-02-10 09:09:14.769102 | orchestrator | Monday 10 February 2025 09:09:14 +0000 (0:00:01.974) 0:06:43.820 ******* 2025-02-10 09:09:15.964545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:15.964992 | orchestrator | 2025-02-10 09:09:15.966181 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:09:15.968842 | orchestrator | Monday 10 February 2025 09:09:15 +0000 (0:00:00.918) 0:06:44.739 ******* 2025-02-10 09:09:15.969600 | orchestrator | 2025-02-10 09:09:15.969657 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:09:15.971154 | orchestrator | Monday 10 February 2025 09:09:15 +0000 (0:00:00.039) 0:06:44.778 ******* 2025-02-10 09:09:15.971634 | orchestrator | 2025-02-10 
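(Aside: the config and service tasks above place a systemd overlay, a limits file and /etc/docker/daemon.json, reload systemd and then manage the docker, docker.socket and containerd units. The template contents are not part of this log; the sketch below uses commonly seen daemon.json settings purely as placeholders and is not the osism.services.docker role itself.)

- name: Docker configuration sketch (placeholder daemon.json values)
  hosts: all
  become: true
  tasks:
    - name: Copy daemon.json configuration file  # values are assumptions, not from this run
      ansible.builtin.copy:
        dest: /etc/docker/daemon.json
        mode: "0644"
        content: |
          {
            "log-driver": "json-file",
            "log-opts": { "max-size": "10m", "max-file": "3" }
          }
      notify: Restart docker service

    - name: Manage service
      ansible.builtin.systemd:
        name: docker
        state: started
        enabled: true
        daemon_reload: true

  handlers:
    - name: Restart docker service
      ansible.builtin.systemd:
        name: docker
        state: restarted

(End of aside; the log continues below.)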
09:09:15.972534 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:09:15.973654 | orchestrator | Monday 10 February 2025 09:09:15 +0000 (0:00:00.045) 0:06:44.824 ******* 2025-02-10 09:09:15.974160 | orchestrator | 2025-02-10 09:09:15.974851 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:09:15.975219 | orchestrator | Monday 10 February 2025 09:09:15 +0000 (0:00:00.038) 0:06:44.862 ******* 2025-02-10 09:09:15.975834 | orchestrator | 2025-02-10 09:09:15.976252 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:09:15.976596 | orchestrator | Monday 10 February 2025 09:09:15 +0000 (0:00:00.037) 0:06:44.899 ******* 2025-02-10 09:09:15.977168 | orchestrator | 2025-02-10 09:09:15.978400 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:09:15.978735 | orchestrator | Monday 10 February 2025 09:09:15 +0000 (0:00:00.044) 0:06:44.944 ******* 2025-02-10 09:09:15.978782 | orchestrator | 2025-02-10 09:09:15.979220 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:09:15.979658 | orchestrator | Monday 10 February 2025 09:09:15 +0000 (0:00:00.039) 0:06:44.984 ******* 2025-02-10 09:09:15.980075 | orchestrator | 2025-02-10 09:09:15.980422 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-10 09:09:15.981857 | orchestrator | Monday 10 February 2025 09:09:15 +0000 (0:00:00.038) 0:06:45.022 ******* 2025-02-10 09:09:17.510230 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:17.512005 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:17.512106 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:17.512380 | orchestrator | 2025-02-10 09:09:17.514921 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-02-10 09:09:18.959085 | orchestrator | Monday 10 February 2025 09:09:17 +0000 (0:00:01.541) 0:06:46.563 ******* 2025-02-10 09:09:18.959275 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:18.959388 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:18.959538 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:18.959559 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:18.962194 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:18.962528 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:18.962924 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:18.963263 | orchestrator | 2025-02-10 09:09:18.966777 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-02-10 09:09:20.146112 | orchestrator | Monday 10 February 2025 09:09:18 +0000 (0:00:01.455) 0:06:48.018 ******* 2025-02-10 09:09:20.146270 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:20.146398 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:20.147440 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:20.149783 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:20.149821 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:20.151422 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:20.151705 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:20.151736 | orchestrator | 2025-02-10 09:09:20.152514 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker 
service] *************** 2025-02-10 09:09:20.153007 | orchestrator | Monday 10 February 2025 09:09:20 +0000 (0:00:01.186) 0:06:49.204 ******* 2025-02-10 09:09:20.280921 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:22.290886 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:22.292276 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:22.292321 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:22.295103 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:22.295776 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:22.296470 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:22.297164 | orchestrator | 2025-02-10 09:09:22.298579 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-02-10 09:09:22.300502 | orchestrator | Monday 10 February 2025 09:09:22 +0000 (0:00:02.144) 0:06:51.349 ******* 2025-02-10 09:09:22.390836 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:22.391135 | orchestrator | 2025-02-10 09:09:22.391945 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-02-10 09:09:22.392534 | orchestrator | Monday 10 February 2025 09:09:22 +0000 (0:00:00.100) 0:06:51.449 ******* 2025-02-10 09:09:23.430158 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:23.430747 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:23.430787 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:23.430802 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:23.430822 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:23.431627 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:23.433511 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:23.434479 | orchestrator | 2025-02-10 09:09:23.435814 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-02-10 09:09:23.437212 | orchestrator | Monday 10 February 2025 09:09:23 +0000 (0:00:01.032) 0:06:52.482 ******* 2025-02-10 09:09:23.735456 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:23.883230 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:23.951414 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:24.019306 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:24.152239 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:24.153052 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:24.153974 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:24.154825 | orchestrator | 2025-02-10 09:09:24.155993 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-02-10 09:09:24.156836 | orchestrator | Monday 10 February 2025 09:09:24 +0000 (0:00:00.730) 0:06:53.213 ******* 2025-02-10 09:09:25.041296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:25.041667 | orchestrator | 2025-02-10 09:09:25.043237 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-02-10 09:09:25.043921 | orchestrator | Monday 10 February 2025 09:09:25 +0000 (0:00:00.885) 0:06:54.098 ******* 2025-02-10 09:09:25.483999 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:25.920121 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:25.920339 | 
orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:25.920801 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:25.921907 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:25.923227 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:25.924834 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:25.925401 | orchestrator | 2025-02-10 09:09:25.926190 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-02-10 09:09:25.930179 | orchestrator | Monday 10 February 2025 09:09:25 +0000 (0:00:00.882) 0:06:54.980 ******* 2025-02-10 09:09:28.632887 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-02-10 09:09:28.634257 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-02-10 09:09:28.634289 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-02-10 09:09:28.634304 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-02-10 09:09:28.634865 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-02-10 09:09:28.635620 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-02-10 09:09:28.636660 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-02-10 09:09:28.637508 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-02-10 09:09:28.638512 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-02-10 09:09:28.639283 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-02-10 09:09:28.640060 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-02-10 09:09:28.640362 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-02-10 09:09:28.640876 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-02-10 09:09:28.641761 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-02-10 09:09:28.642125 | orchestrator | 2025-02-10 09:09:28.642833 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-02-10 09:09:28.643415 | orchestrator | Monday 10 February 2025 09:09:28 +0000 (0:00:02.706) 0:06:57.687 ******* 2025-02-10 09:09:28.764060 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:28.827026 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:28.910143 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:28.973909 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:29.037761 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:29.140488 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:29.140846 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:29.143988 | orchestrator | 2025-02-10 09:09:30.003614 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-02-10 09:09:30.003787 | orchestrator | Monday 10 February 2025 09:09:29 +0000 (0:00:00.512) 0:06:58.200 ******* 2025-02-10 09:09:30.003841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:30.007411 | orchestrator | 2025-02-10 09:09:30.007749 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-02-10 09:09:30.008444 | orchestrator | Monday 10 February 2025 09:09:29 +0000 (0:00:00.860) 
0:06:59.060 ******* 2025-02-10 09:09:30.596455 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:30.667547 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:31.104992 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:31.105181 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:31.106684 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:31.107731 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:31.108767 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:31.109148 | orchestrator | 2025-02-10 09:09:31.109940 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-02-10 09:09:31.110642 | orchestrator | Monday 10 February 2025 09:09:31 +0000 (0:00:01.103) 0:07:00.164 ******* 2025-02-10 09:09:31.530507 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:31.943272 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:31.943552 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:31.943577 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:31.943612 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:31.943718 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:31.944505 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:31.945425 | orchestrator | 2025-02-10 09:09:31.946549 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-02-10 09:09:31.946836 | orchestrator | Monday 10 February 2025 09:09:31 +0000 (0:00:00.833) 0:07:00.997 ******* 2025-02-10 09:09:32.083870 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:32.150757 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:32.215448 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:32.301300 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:32.364546 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:32.463934 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:32.464313 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:32.465588 | orchestrator | 2025-02-10 09:09:32.469043 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-02-10 09:09:33.906321 | orchestrator | Monday 10 February 2025 09:09:32 +0000 (0:00:00.525) 0:07:01.523 ******* 2025-02-10 09:09:33.906575 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:33.907625 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:33.907713 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:33.908442 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:33.910186 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:33.911002 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:33.911615 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:33.914983 | orchestrator | 2025-02-10 09:09:34.034089 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-02-10 09:09:34.034180 | orchestrator | Monday 10 February 2025 09:09:33 +0000 (0:00:01.442) 0:07:02.965 ******* 2025-02-10 09:09:34.034210 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:34.109235 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:34.173179 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:34.241953 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:34.313707 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:34.400989 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:34.401165 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:34.401954 
| orchestrator | 2025-02-10 09:09:34.402446 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-02-10 09:09:34.403120 | orchestrator | Monday 10 February 2025 09:09:34 +0000 (0:00:00.495) 0:07:03.461 ******* 2025-02-10 09:09:36.465083 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:36.466297 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:36.469123 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:36.469767 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:36.470424 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:36.470958 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:36.474496 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:36.475025 | orchestrator | 2025-02-10 09:09:36.475776 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-02-10 09:09:36.476219 | orchestrator | Monday 10 February 2025 09:09:36 +0000 (0:00:02.062) 0:07:05.523 ******* 2025-02-10 09:09:37.857019 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:37.859191 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:37.860271 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:37.860313 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:37.860338 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:37.860425 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:37.860504 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:37.861185 | orchestrator | 2025-02-10 09:09:37.861683 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-02-10 09:09:37.862520 | orchestrator | Monday 10 February 2025 09:09:37 +0000 (0:00:01.391) 0:07:06.914 ******* 2025-02-10 09:09:39.635457 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:39.635672 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:39.636455 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:39.637912 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:39.640042 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:39.640817 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:39.641425 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:39.643754 | orchestrator | 2025-02-10 09:09:39.644538 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-02-10 09:09:39.645419 | orchestrator | Monday 10 February 2025 09:09:39 +0000 (0:00:01.778) 0:07:08.693 ******* 2025-02-10 09:09:41.533720 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:41.533907 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:41.535224 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:41.536775 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:41.537268 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:41.537838 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:41.538866 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:41.539990 | orchestrator | 2025-02-10 09:09:41.540413 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-10 09:09:41.541144 | orchestrator | Monday 10 February 2025 09:09:41 +0000 (0:00:01.898) 0:07:10.592 ******* 2025-02-10 09:09:41.956491 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:42.399075 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:42.399706 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:42.401403 | orchestrator 
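(Aside: the docker_compose role above removes any legacy docker-compose binary or distribution package, installs docker-compose-plugin from the Docker repository, and wires up an osism.target plus a docker-compose systemd unit. The unit file contents are not shown in this log; the following is a hypothetical minimal equivalent under that assumption.)

- name: docker-compose plugin and osism.target sketch (unit content is an assumption)
  hosts: all
  become: true
  tasks:
    - name: Install docker-compose-plugin package
      ansible.builtin.apt:
        name: docker-compose-plugin
        state: present

    - name: Copy osism.target systemd file  # content is illustrative only
      ansible.builtin.copy:
        dest: /etc/systemd/system/osism.target
        mode: "0644"
        content: |
          [Unit]
          Description=OSISM services

          [Install]
          WantedBy=multi-user.target

    - name: Enable osism.target
      ansible.builtin.systemd:
        name: osism.target
        enabled: true
        daemon_reload: true

(End of aside; the log continues below.)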
| ok: [testbed-node-2] 2025-02-10 09:09:42.402641 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:42.403672 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:42.404964 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:42.406008 | orchestrator | 2025-02-10 09:09:42.406766 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-10 09:09:42.407598 | orchestrator | Monday 10 February 2025 09:09:42 +0000 (0:00:00.862) 0:07:11.455 ******* 2025-02-10 09:09:42.548750 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:42.618541 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:42.688632 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:42.753110 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:42.814441 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:43.229743 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:43.230076 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:43.230952 | orchestrator | 2025-02-10 09:09:43.231907 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-02-10 09:09:43.235923 | orchestrator | Monday 10 February 2025 09:09:43 +0000 (0:00:00.835) 0:07:12.291 ******* 2025-02-10 09:09:43.372527 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:43.437395 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:43.505527 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:43.579554 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:43.641632 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:43.747581 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:43.751106 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:43.751170 | orchestrator | 2025-02-10 09:09:43.882661 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-02-10 09:09:43.882803 | orchestrator | Monday 10 February 2025 09:09:43 +0000 (0:00:00.514) 0:07:12.805 ******* 2025-02-10 09:09:43.882848 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:43.947875 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:44.020598 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:44.264492 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:44.328614 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:44.430868 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:44.431277 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:44.431311 | orchestrator | 2025-02-10 09:09:44.431721 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-02-10 09:09:44.435086 | orchestrator | Monday 10 February 2025 09:09:44 +0000 (0:00:00.684) 0:07:13.489 ******* 2025-02-10 09:09:44.567975 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:44.633064 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:44.704253 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:44.767128 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:44.830119 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:44.956745 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:44.956970 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:44.957524 | orchestrator | 2025-02-10 09:09:44.958571 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-02-10 09:09:44.958879 | orchestrator | Monday 10 February 2025 09:09:44 +0000 (0:00:00.525) 0:07:14.015 
******* 2025-02-10 09:09:45.086387 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:45.156727 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:45.218406 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:45.282308 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:45.371019 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:45.483209 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:45.484159 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:45.486769 | orchestrator | 2025-02-10 09:09:45.493073 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-02-10 09:09:45.493144 | orchestrator | Monday 10 February 2025 09:09:45 +0000 (0:00:00.527) 0:07:14.542 ******* 2025-02-10 09:09:51.257846 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:51.261188 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:51.261291 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:51.264862 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:51.266540 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:51.267151 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:51.267793 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:51.268573 | orchestrator | 2025-02-10 09:09:51.271731 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-02-10 09:09:51.271819 | orchestrator | Monday 10 February 2025 09:09:51 +0000 (0:00:05.774) 0:07:20.317 ******* 2025-02-10 09:09:51.395850 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:51.459261 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:51.532690 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:51.596326 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:51.653746 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:51.979620 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:51.980401 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:51.981240 | orchestrator | 2025-02-10 09:09:51.982135 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-02-10 09:09:51.983085 | orchestrator | Monday 10 February 2025 09:09:51 +0000 (0:00:00.722) 0:07:21.039 ******* 2025-02-10 09:09:52.793468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:52.793736 | orchestrator | 2025-02-10 09:09:52.794394 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-02-10 09:09:52.798244 | orchestrator | Monday 10 February 2025 09:09:52 +0000 (0:00:00.812) 0:07:21.852 ******* 2025-02-10 09:09:54.557737 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:54.559217 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:54.560155 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:54.561561 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:54.562399 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:54.563028 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:54.563732 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:54.563974 | orchestrator | 2025-02-10 09:09:54.564540 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-02-10 09:09:54.564931 | orchestrator | Monday 10 February 2025 09:09:54 +0000 
(0:00:01.763) 0:07:23.615 ******* 2025-02-10 09:09:55.664193 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:55.665781 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:55.665800 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:55.666651 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:55.667635 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:55.668570 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:55.668850 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:55.669337 | orchestrator | 2025-02-10 09:09:55.669782 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-02-10 09:09:55.670253 | orchestrator | Monday 10 February 2025 09:09:55 +0000 (0:00:01.106) 0:07:24.722 ******* 2025-02-10 09:09:56.256735 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:56.324581 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:56.776902 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:56.777828 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:56.777885 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:56.779009 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:56.781906 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:58.495190 | orchestrator | 2025-02-10 09:09:58.495335 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-02-10 09:09:58.495410 | orchestrator | Monday 10 February 2025 09:09:56 +0000 (0:00:01.113) 0:07:25.835 ******* 2025-02-10 09:09:58.495444 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:09:58.495534 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:09:58.496380 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:09:58.497587 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:09:58.498776 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:09:58.499955 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:09:58.500327 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:09:58.501139 | orchestrator | 2025-02-10 09:09:58.501896 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-02-10 09:09:58.502778 | orchestrator | Monday 10 February 2025 09:09:58 +0000 (0:00:01.716) 0:07:27.552 ******* 2025-02-10 09:09:59.292918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:59.295196 | orchestrator | 2025-02-10 09:09:59.296759 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-02-10 
09:09:59.296824 | orchestrator | Monday 10 February 2025 09:09:59 +0000 (0:00:00.800) 0:07:28.352 ******* 2025-02-10 09:10:08.865507 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:08.865716 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:08.866416 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:08.868288 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:08.868413 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:08.869168 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:08.870076 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:08.871618 | orchestrator | 2025-02-10 09:10:08.872338 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-02-10 09:10:08.874082 | orchestrator | Monday 10 February 2025 09:10:08 +0000 (0:00:09.570) 0:07:37.922 ******* 2025-02-10 09:10:10.807557 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:10.808834 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:10.810380 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:10.811657 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:10.814582 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:10.814725 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:10.815933 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:10.817157 | orchestrator | 2025-02-10 09:10:10.818774 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-02-10 09:10:10.820224 | orchestrator | Monday 10 February 2025 09:10:10 +0000 (0:00:01.943) 0:07:39.866 ******* 2025-02-10 09:10:12.148817 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:12.150136 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:12.150189 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:12.150216 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:12.151192 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:12.153049 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:12.155054 | orchestrator | 2025-02-10 09:10:12.159285 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-02-10 09:10:12.163737 | orchestrator | Monday 10 February 2025 09:10:12 +0000 (0:00:01.338) 0:07:41.204 ******* 2025-02-10 09:10:13.655126 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:13.655312 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:13.655335 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:13.655398 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:13.655419 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:13.656955 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:13.660210 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:13.660333 | orchestrator | 2025-02-10 09:10:13.661766 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-02-10 09:10:13.661794 | orchestrator | 2025-02-10 09:10:13.661814 | orchestrator | TASK [Include hardening role] ************************************************** 2025-02-10 09:10:13.662331 | orchestrator | Monday 10 February 2025 09:10:13 +0000 (0:00:01.510) 0:07:42.715 ******* 2025-02-10 09:10:13.788320 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:10:13.847560 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:13.906272 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:13.971119 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:10:14.032248 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:14.150176 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:14.150424 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:14.150455 | orchestrator | 2025-02-10 09:10:14.150479 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-02-10 09:10:14.150555 | orchestrator | 2025-02-10 09:10:14.150578 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-02-10 09:10:14.150720 | orchestrator | Monday 10 February 2025 09:10:14 +0000 (0:00:00.494) 0:07:43.209 ******* 2025-02-10 09:10:15.556621 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:15.558866 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:15.559013 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:15.561472 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:15.561759 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:15.562150 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:15.562527 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:15.562937 | orchestrator | 2025-02-10 09:10:15.563268 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-02-10 09:10:15.563817 | orchestrator | Monday 10 February 2025 09:10:15 +0000 (0:00:01.405) 0:07:44.615 ******* 2025-02-10 09:10:17.248687 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:17.252676 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:17.252748 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:17.253126 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:17.253265 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:17.253372 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:17.253624 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:17.254406 | orchestrator | 2025-02-10 09:10:17.254624 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-02-10 09:10:17.255892 | orchestrator | Monday 10 February 2025 09:10:17 +0000 (0:00:01.691) 0:07:46.307 ******* 2025-02-10 09:10:17.385288 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:10:17.447952 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:17.522545 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:17.585195 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:10:17.650633 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:18.065887 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:18.066157 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:18.066844 | orchestrator | 2025-02-10 09:10:18.067915 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-02-10 09:10:18.068082 | orchestrator | Monday 10 February 2025 09:10:18 +0000 (0:00:00.820) 0:07:47.127 ******* 2025-02-10 09:10:19.441833 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:19.443119 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:19.443151 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:19.444861 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:19.445537 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:19.447036 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:19.448001 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:19.449099 | orchestrator | 2025-02-10 09:10:19.449652 | orchestrator | PLAY [Set state 
bootstrap] ***************************************************** 2025-02-10 09:10:19.450571 | orchestrator | 2025-02-10 09:10:19.451431 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-02-10 09:10:19.452110 | orchestrator | Monday 10 February 2025 09:10:19 +0000 (0:00:01.375) 0:07:48.502 ******* 2025-02-10 09:10:20.418137 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:10:20.418529 | orchestrator | 2025-02-10 09:10:20.419503 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-10 09:10:20.422806 | orchestrator | Monday 10 February 2025 09:10:20 +0000 (0:00:00.975) 0:07:49.477 ******* 2025-02-10 09:10:20.840867 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:21.312622 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:21.312879 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:21.313489 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:21.314163 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:21.315570 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:21.316036 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:21.316088 | orchestrator | 2025-02-10 09:10:21.316565 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-10 09:10:21.317009 | orchestrator | Monday 10 February 2025 09:10:21 +0000 (0:00:00.896) 0:07:50.374 ******* 2025-02-10 09:10:22.470193 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:22.470621 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:22.470682 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:22.471107 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:22.472019 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:22.473066 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:22.473105 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:22.473120 | orchestrator | 2025-02-10 09:10:22.473143 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-02-10 09:10:22.473209 | orchestrator | Monday 10 February 2025 09:10:22 +0000 (0:00:01.154) 0:07:51.528 ******* 2025-02-10 09:10:23.496139 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:10:23.496411 | orchestrator | 2025-02-10 09:10:23.498745 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-10 09:10:23.500252 | orchestrator | Monday 10 February 2025 09:10:23 +0000 (0:00:01.025) 0:07:52.554 ******* 2025-02-10 09:10:23.941737 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:24.374768 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:24.374987 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:24.377199 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:24.379396 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:24.380156 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:24.381998 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:24.383442 | orchestrator | 2025-02-10 09:10:24.384723 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-10 09:10:24.386144 | orchestrator | Monday 10 February 2025 09:10:24 +0000 (0:00:00.880) 
0:07:53.434 ******* 2025-02-10 09:10:25.481954 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:25.483262 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:25.484097 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:25.484756 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:25.486119 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:25.486248 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:25.487287 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:25.488131 | orchestrator | 2025-02-10 09:10:25.489033 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:10:25.490110 | orchestrator | 2025-02-10 09:10:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:10:25.491002 | orchestrator | 2025-02-10 09:10:25 | INFO  | Please wait and do not abort execution. 2025-02-10 09:10:25.491031 | orchestrator | testbed-manager : ok=160  changed=37  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-02-10 09:10:25.491700 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-02-10 09:10:25.493181 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:10:25.493874 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:10:25.494581 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:10:25.495489 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:10:25.495690 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:10:25.496438 | orchestrator | 2025-02-10 09:10:25.496998 | orchestrator | Monday 10 February 2025 09:10:25 +0000 (0:00:01.108) 0:07:54.542 ******* 2025-02-10 09:10:25.498121 | orchestrator | =============================================================================== 2025-02-10 09:10:25.498599 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.99s 2025-02-10 09:10:25.498753 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.85s 2025-02-10 09:10:25.499551 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.07s 2025-02-10 09:10:25.500579 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.06s 2025-02-10 09:10:25.501094 | orchestrator | osism.services.docker : Install docker package ------------------------- 13.38s 2025-02-10 09:10:25.501979 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.08s 2025-02-10 09:10:25.502638 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.27s 2025-02-10 09:10:25.503243 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.21s 2025-02-10 09:10:25.503557 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.57s 2025-02-10 09:10:25.504004 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.74s 2025-02-10 09:10:25.504995 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.24s 
2025-02-10 09:10:25.505327 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.97s 2025-02-10 09:10:25.506133 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.94s 2025-02-10 09:10:25.506790 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.68s 2025-02-10 09:10:25.507266 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 6.40s 2025-02-10 09:10:25.507902 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.13s 2025-02-10 09:10:25.509014 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.01s 2025-02-10 09:10:25.509880 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.00s 2025-02-10 09:10:25.509921 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.77s 2025-02-10 09:10:26.241280 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.71s 2025-02-10 09:10:26.241494 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-02-10 09:10:27.697877 | orchestrator | + osism apply network 2025-02-10 09:10:27.698003 | orchestrator | 2025-02-10 09:10:27 | INFO  | Task 4b438cc5-34d8-41bf-8549-a98354591aa0 (network) was prepared for execution. 2025-02-10 09:10:30.809257 | orchestrator | 2025-02-10 09:10:27 | INFO  | It takes a moment until task 4b438cc5-34d8-41bf-8549-a98354591aa0 (network) has been started and output is visible here. 2025-02-10 09:10:30.809484 | orchestrator | 2025-02-10 09:10:30.809588 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-02-10 09:10:30.809610 | orchestrator | 2025-02-10 09:10:30.809644 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-02-10 09:10:30.810757 | orchestrator | Monday 10 February 2025 09:10:30 +0000 (0:00:00.226) 0:00:00.226 ******* 2025-02-10 09:10:30.952868 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:31.035185 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:31.112239 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:31.188271 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:31.263222 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:31.506112 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:31.506271 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:31.507404 | orchestrator | 2025-02-10 09:10:31.508177 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-02-10 09:10:31.511428 | orchestrator | Monday 10 February 2025 09:10:31 +0000 (0:00:00.700) 0:00:00.926 ******* 2025-02-10 09:10:32.763898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:10:32.764793 | orchestrator | 2025-02-10 09:10:32.764844 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-02-10 09:10:32.765582 | orchestrator | Monday 10 February 2025 09:10:32 +0000 (0:00:01.255) 0:00:02.182 ******* 2025-02-10 09:10:34.690457 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:34.690853 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:34.690885 | orchestrator | ok: 
[testbed-manager] 2025-02-10 09:10:34.690907 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:34.691152 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:34.692189 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:34.693685 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:34.694101 | orchestrator | 2025-02-10 09:10:34.695032 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-02-10 09:10:34.695509 | orchestrator | Monday 10 February 2025 09:10:34 +0000 (0:00:01.925) 0:00:04.108 ******* 2025-02-10 09:10:36.429024 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:36.429809 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:36.429966 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:36.430917 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:36.434692 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:36.434789 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:36.434808 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:36.434823 | orchestrator | 2025-02-10 09:10:36.434844 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-02-10 09:10:36.435450 | orchestrator | Monday 10 February 2025 09:10:36 +0000 (0:00:01.738) 0:00:05.846 ******* 2025-02-10 09:10:36.978005 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-02-10 09:10:37.624529 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-02-10 09:10:37.624732 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-02-10 09:10:37.625693 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-02-10 09:10:37.626295 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-02-10 09:10:37.627767 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-02-10 09:10:37.628970 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-02-10 09:10:37.630616 | orchestrator | 2025-02-10 09:10:37.631946 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-02-10 09:10:37.632740 | orchestrator | Monday 10 February 2025 09:10:37 +0000 (0:00:01.198) 0:00:07.045 ******* 2025-02-10 09:10:39.443995 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:10:39.444194 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:10:39.444224 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-10 09:10:39.446459 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:10:39.449403 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-10 09:10:39.449428 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:10:39.449443 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:10:39.449458 | orchestrator | 2025-02-10 09:10:39.449475 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-02-10 09:10:39.449495 | orchestrator | Monday 10 February 2025 09:10:39 +0000 (0:00:01.820) 0:00:08.866 ******* 2025-02-10 09:10:41.159556 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:41.163788 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:41.164990 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:41.167028 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:41.167857 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:41.170408 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:41.173040 | orchestrator | changed: [testbed-node-5] 2025-02-10 
09:10:41.173329 | orchestrator | 2025-02-10 09:10:41.173902 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-02-10 09:10:41.175007 | orchestrator | Monday 10 February 2025 09:10:41 +0000 (0:00:01.710) 0:00:10.576 ******* 2025-02-10 09:10:41.717841 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:10:42.189568 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:10:42.190505 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-10 09:10:42.190546 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-10 09:10:42.191458 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:10:42.192187 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:10:42.192859 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:10:42.193371 | orchestrator | 2025-02-10 09:10:42.194173 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-02-10 09:10:42.194650 | orchestrator | Monday 10 February 2025 09:10:42 +0000 (0:00:01.035) 0:00:11.611 ******* 2025-02-10 09:10:42.663382 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:42.756774 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:43.350476 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:43.351283 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:43.351990 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:43.353374 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:43.354238 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:43.354744 | orchestrator | 2025-02-10 09:10:43.355584 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-02-10 09:10:43.356370 | orchestrator | Monday 10 February 2025 09:10:43 +0000 (0:00:01.156) 0:00:12.768 ******* 2025-02-10 09:10:43.547691 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:10:43.630288 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:43.708089 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:43.782657 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:10:43.853473 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:44.197723 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:44.198242 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:44.198680 | orchestrator | 2025-02-10 09:10:44.200794 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-02-10 09:10:44.200971 | orchestrator | Monday 10 February 2025 09:10:44 +0000 (0:00:00.848) 0:00:13.616 ******* 2025-02-10 09:10:46.255916 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:46.256279 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:46.256373 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:46.259858 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:46.259967 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:48.384995 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:48.385127 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:48.385139 | orchestrator | 2025-02-10 09:10:48.385161 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-02-10 09:10:48.385169 | orchestrator | Monday 10 February 2025 09:10:46 +0000 (0:00:02.060) 0:00:15.677 ******* 2025-02-10 09:10:48.385189 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 
09:10:48.388090 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:10:48.388107 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:10:48.388810 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:10:48.389929 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:10:48.391846 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-02-10 09:10:48.392638 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:10:48.393497 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:10:48.394253 | orchestrator | 2025-02-10 09:10:48.394952 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-02-10 09:10:48.395883 | orchestrator | Monday 10 February 2025 09:10:48 +0000 (0:00:02.123) 0:00:17.800 ******* 2025-02-10 09:10:49.956289 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:49.956536 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:49.957617 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:49.958436 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:49.958980 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:49.960140 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:49.960697 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:49.960719 | orchestrator | 2025-02-10 09:10:49.961577 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-02-10 09:10:49.962135 | orchestrator | Monday 10 February 2025 09:10:49 +0000 (0:00:01.577) 0:00:19.378 ******* 2025-02-10 09:10:51.516228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:10:51.521420 | orchestrator | 2025-02-10 09:10:51.522499 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-02-10 09:10:51.523404 | orchestrator | Monday 10 February 2025 09:10:51 +0000 (0:00:01.553) 0:00:20.932 ******* 2025-02-10 09:10:52.112577 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:52.565946 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:52.566315 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:52.567496 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:52.568089 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:52.568841 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:52.570699 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:52.571000 | orchestrator | 2025-02-10 09:10:52.571562 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-02-10 09:10:52.571965 | orchestrator | Monday 10 February 2025 09:10:52 +0000 (0:00:01.054) 0:00:21.986 ******* 2025-02-10 09:10:52.731893 | orchestrator | ok: [testbed-manager] 2025-02-10 
09:10:52.810467 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:53.068868 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:53.152822 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:53.237575 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:53.388706 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:53.388870 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:53.390832 | orchestrator | 2025-02-10 09:10:53.391039 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-02-10 09:10:53.391934 | orchestrator | Monday 10 February 2025 09:10:53 +0000 (0:00:00.820) 0:00:22.806 ******* 2025-02-10 09:10:53.871288 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:10:53.872520 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:10:54.452631 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:10:54.453083 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:10:54.457639 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:10:54.458391 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:10:54.458420 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:10:54.458444 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:10:54.459586 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:10:54.460470 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:10:54.461450 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:10:54.461940 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:10:54.462671 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:10:54.463564 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:10:54.464417 | orchestrator | 2025-02-10 09:10:54.464902 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-02-10 09:10:54.465550 | orchestrator | Monday 10 February 2025 09:10:54 +0000 (0:00:01.068) 0:00:23.874 ******* 2025-02-10 09:10:54.781503 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:10:54.865881 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:54.948184 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:55.032979 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:10:55.115996 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:56.280596 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:56.280886 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:56.281601 | orchestrator | 2025-02-10 09:10:56.282411 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-02-10 09:10:56.283497 | orchestrator | Monday 10 February 2025 09:10:56 +0000 (0:00:01.826) 0:00:25.701 ******* 2025-02-10 09:10:56.451522 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:10:56.533312 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:56.799521 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:56.880541 | orchestrator | skipping: 
[testbed-node-2] 2025-02-10 09:10:56.961632 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:56.999039 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:56.999187 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:56.999782 | orchestrator | 2025-02-10 09:10:57.000321 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:10:57.000447 | orchestrator | 2025-02-10 09:10:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:10:57.000505 | orchestrator | 2025-02-10 09:10:56 | INFO  | Please wait and do not abort execution. 2025-02-10 09:10:57.001173 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:10:57.001814 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:10:57.002858 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:10:57.003279 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:10:57.003809 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:10:57.004810 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:10:57.005252 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:10:57.005909 | orchestrator | 2025-02-10 09:10:57.006293 | orchestrator | Monday 10 February 2025 09:10:56 +0000 (0:00:00.721) 0:00:26.423 ******* 2025-02-10 09:10:57.006833 | orchestrator | =============================================================================== 2025-02-10 09:10:57.007649 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 2.12s 2025-02-10 09:10:57.008409 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.06s 2025-02-10 09:10:57.008587 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.93s 2025-02-10 09:10:57.009548 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.83s 2025-02-10 09:10:57.009705 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.82s 2025-02-10 09:10:57.010631 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s 2025-02-10 09:10:57.010749 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.71s 2025-02-10 09:10:57.011308 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.58s 2025-02-10 09:10:57.011852 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.55s 2025-02-10 09:10:57.012292 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s 2025-02-10 09:10:57.012730 | orchestrator | osism.commons.network : Create required directories --------------------- 1.20s 2025-02-10 09:10:57.012915 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s 2025-02-10 09:10:57.013648 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.07s 2025-02-10 09:10:57.013999 | orchestrator | 
osism.commons.network : List existing configuration files --------------- 1.05s 2025-02-10 09:10:57.014272 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.04s 2025-02-10 09:10:57.014990 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.85s 2025-02-10 09:10:57.015467 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.82s 2025-02-10 09:10:57.015513 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.72s 2025-02-10 09:10:57.015856 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.70s 2025-02-10 09:10:57.521448 | orchestrator | + osism apply wireguard 2025-02-10 09:10:58.956147 | orchestrator | 2025-02-10 09:10:58 | INFO  | Task 41af479e-6fdf-4f8e-b918-3adac2267166 (wireguard) was prepared for execution. 2025-02-10 09:11:02.141791 | orchestrator | 2025-02-10 09:10:58 | INFO  | It takes a moment until task 41af479e-6fdf-4f8e-b918-3adac2267166 (wireguard) has been started and output is visible here. 2025-02-10 09:11:02.141930 | orchestrator | 2025-02-10 09:11:02.141987 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-02-10 09:11:02.143323 | orchestrator | 2025-02-10 09:11:02.143578 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-02-10 09:11:02.143600 | orchestrator | Monday 10 February 2025 09:11:02 +0000 (0:00:00.174) 0:00:00.174 ******* 2025-02-10 09:11:03.800922 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:03.802318 | orchestrator | 2025-02-10 09:11:03.802486 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-02-10 09:11:03.803243 | orchestrator | Monday 10 February 2025 09:11:03 +0000 (0:00:01.660) 0:00:01.834 ******* 2025-02-10 09:11:10.381203 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:10.381453 | orchestrator | 2025-02-10 09:11:10.382222 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-02-10 09:11:10.384224 | orchestrator | Monday 10 February 2025 09:11:10 +0000 (0:00:06.579) 0:00:08.413 ******* 2025-02-10 09:11:10.977224 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:10.978537 | orchestrator | 2025-02-10 09:11:10.979388 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-02-10 09:11:10.980688 | orchestrator | Monday 10 February 2025 09:11:10 +0000 (0:00:00.598) 0:00:09.012 ******* 2025-02-10 09:11:11.475396 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:11.476059 | orchestrator | 2025-02-10 09:11:11.477276 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-02-10 09:11:11.477894 | orchestrator | Monday 10 February 2025 09:11:11 +0000 (0:00:00.494) 0:00:09.507 ******* 2025-02-10 09:11:11.987999 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:11.988210 | orchestrator | 2025-02-10 09:11:11.989920 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-02-10 09:11:11.990393 | orchestrator | Monday 10 February 2025 09:11:11 +0000 (0:00:00.515) 0:00:10.022 ******* 2025-02-10 09:11:12.598541 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:12.598779 | orchestrator | 2025-02-10 09:11:12.599789 | orchestrator | TASK [osism.services.wireguard : Get private key 
- server] ********************* 2025-02-10 09:11:12.600177 | orchestrator | Monday 10 February 2025 09:11:12 +0000 (0:00:00.609) 0:00:10.632 ******* 2025-02-10 09:11:13.049889 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:13.050858 | orchestrator | 2025-02-10 09:11:13.051527 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-02-10 09:11:13.052924 | orchestrator | Monday 10 February 2025 09:11:13 +0000 (0:00:00.452) 0:00:11.085 ******* 2025-02-10 09:11:14.374622 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:14.375605 | orchestrator | 2025-02-10 09:11:14.377281 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-02-10 09:11:14.377700 | orchestrator | Monday 10 February 2025 09:11:14 +0000 (0:00:01.323) 0:00:12.408 ******* 2025-02-10 09:11:15.338328 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 09:11:15.339075 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:15.339604 | orchestrator | 2025-02-10 09:11:15.340418 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-02-10 09:11:15.340986 | orchestrator | Monday 10 February 2025 09:11:15 +0000 (0:00:00.963) 0:00:13.372 ******* 2025-02-10 09:11:17.198288 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:17.199179 | orchestrator | 2025-02-10 09:11:17.199612 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-02-10 09:11:17.200504 | orchestrator | Monday 10 February 2025 09:11:17 +0000 (0:00:01.859) 0:00:15.231 ******* 2025-02-10 09:11:18.097607 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:18.097966 | orchestrator | 2025-02-10 09:11:18.098006 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:11:18.098467 | orchestrator | 2025-02-10 09:11:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:11:18.098548 | orchestrator | 2025-02-10 09:11:18 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:11:18.099588 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:11:18.099997 | orchestrator | 2025-02-10 09:11:18.100206 | orchestrator | Monday 10 February 2025 09:11:18 +0000 (0:00:00.900) 0:00:16.132 ******* 2025-02-10 09:11:18.100672 | orchestrator | =============================================================================== 2025-02-10 09:11:18.101201 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.58s 2025-02-10 09:11:18.102060 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.86s 2025-02-10 09:11:18.102423 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.66s 2025-02-10 09:11:18.102736 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.32s 2025-02-10 09:11:18.103129 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s 2025-02-10 09:11:18.103574 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2025-02-10 09:11:18.103947 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.61s 2025-02-10 09:11:18.104773 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.60s 2025-02-10 09:11:18.105075 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-02-10 09:11:18.105105 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.50s 2025-02-10 09:11:18.105406 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2025-02-10 09:11:18.694485 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-02-10 09:11:18.733168 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-02-10 09:11:18.801842 | orchestrator | Dload Upload Total Spent Left Speed 2025-02-10 09:11:18.801989 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 217 0 --:--:-- --:--:-- --:--:-- 220 2025-02-10 09:11:18.814589 | orchestrator | + osism apply --environment custom workarounds 2025-02-10 09:11:20.212714 | orchestrator | 2025-02-10 09:11:20 | INFO  | Trying to run play workarounds in environment custom 2025-02-10 09:11:20.262257 | orchestrator | 2025-02-10 09:11:20 | INFO  | Task 62404e44-f9b4-4fcc-8369-d31bcc3dc8ce (workarounds) was prepared for execution. 2025-02-10 09:11:23.422177 | orchestrator | 2025-02-10 09:11:20 | INFO  | It takes a moment until task 62404e44-f9b4-4fcc-8369-d31bcc3dc8ce (workarounds) has been started and output is visible here. 
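(Editor's note: the wg0.conf rendered by the osism.services.wireguard role above is not printed in the log. A minimal server configuration of the kind consumed by wg-quick@wg0.service looks roughly like the sketch below; the address, port and key placeholders are illustrative assumptions, not values taken from this deployment.)

    [Interface]
    # Server-side settings managed by the role; key material comes from the
    # "Create public and private key - server" and "Create preshared key" steps.
    Address = 192.168.90.1/24        # placeholder subnet, not from the log
    ListenPort = 51820               # placeholder port, not from the log
    PrivateKey = <server private key>

    [Peer]
    # One such block per client configuration file produced by
    # "Copy client configuration files".
    PublicKey = <client public key>
    PresharedKey = <preshared key>
    AllowedIPs = 192.168.90.2/32     # placeholder client address

Restarting wg-quick@wg0.service, as done by the "Restart wg0 service" handler above, re-reads this file and brings the tunnel up with the new keys.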
2025-02-10 09:11:23.422334 | orchestrator | 2025-02-10 09:11:23.423414 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:11:23.423479 | orchestrator | 2025-02-10 09:11:23.423627 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-02-10 09:11:23.423651 | orchestrator | Monday 10 February 2025 09:11:23 +0000 (0:00:00.140) 0:00:00.140 ******* 2025-02-10 09:11:23.586294 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-02-10 09:11:23.668087 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-02-10 09:11:23.752389 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-02-10 09:11:23.835050 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-02-10 09:11:23.923029 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-02-10 09:11:24.187487 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-02-10 09:11:24.187785 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-02-10 09:11:24.188754 | orchestrator | 2025-02-10 09:11:24.189568 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-02-10 09:11:24.190771 | orchestrator | 2025-02-10 09:11:24.191581 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-02-10 09:11:24.192170 | orchestrator | Monday 10 February 2025 09:11:24 +0000 (0:00:00.768) 0:00:00.908 ******* 2025-02-10 09:11:26.951050 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:26.952396 | orchestrator | 2025-02-10 09:11:26.952451 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-02-10 09:11:26.956081 | orchestrator | 2025-02-10 09:11:26.956645 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-02-10 09:11:26.957966 | orchestrator | Monday 10 February 2025 09:11:26 +0000 (0:00:02.762) 0:00:03.670 ******* 2025-02-10 09:11:28.822751 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:11:28.826754 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:11:28.827880 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:11:28.827931 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:11:28.830765 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:11:28.831568 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:11:28.832270 | orchestrator | 2025-02-10 09:11:28.833093 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-02-10 09:11:28.833658 | orchestrator | 2025-02-10 09:11:28.834391 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-02-10 09:11:28.835034 | orchestrator | Monday 10 February 2025 09:11:28 +0000 (0:00:01.868) 0:00:05.539 ******* 2025-02-10 09:11:30.366304 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:11:30.366613 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:11:30.366641 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:11:30.366659 | orchestrator | changed: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:11:30.367676 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:11:30.368337 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:11:30.371583 | orchestrator | 2025-02-10 09:11:34.398141 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-02-10 09:11:34.398295 | orchestrator | Monday 10 February 2025 09:11:30 +0000 (0:00:01.545) 0:00:07.085 ******* 2025-02-10 09:11:34.398333 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:11:34.400863 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:11:34.400900 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:11:34.400922 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:11:34.402186 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:11:34.404151 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:11:34.404505 | orchestrator | 2025-02-10 09:11:34.405110 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-02-10 09:11:34.406467 | orchestrator | Monday 10 February 2025 09:11:34 +0000 (0:00:04.032) 0:00:11.117 ******* 2025-02-10 09:11:34.556296 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:11:34.637632 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:11:34.720717 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:11:34.955001 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:11:35.117543 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:11:35.118136 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:11:35.119749 | orchestrator | 2025-02-10 09:11:35.121675 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-02-10 09:11:35.122897 | orchestrator | 2025-02-10 09:11:35.124451 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-02-10 09:11:35.125064 | orchestrator | Monday 10 February 2025 09:11:35 +0000 (0:00:00.713) 0:00:11.830 ******* 2025-02-10 09:11:36.907411 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:36.907728 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:11:36.907760 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:11:36.907807 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:11:36.908371 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:11:36.909494 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:11:36.911809 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:11:36.912776 | orchestrator | 2025-02-10 09:11:36.913832 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-02-10 09:11:36.914786 | orchestrator | Monday 10 February 2025 09:11:36 +0000 (0:00:01.795) 0:00:13.626 ******* 2025-02-10 09:11:38.610939 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:38.611838 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:11:38.611866 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:11:38.611884 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:11:38.612053 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:11:38.612225 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:11:38.612740 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:11:38.613824 | orchestrator | 
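The "Copy custom CA certificates" and "Run update-ca-certificates" tasks above distribute the testbed CA to the Debian-family nodes; the skipped "Run update-ca-trust" task is the RedHat-family counterpart. A hand-run equivalent would look roughly like the following; the destination directory is the Debian default and is an assumption, since the log only shows the source path:

    # Debian/Ubuntu nodes (what actually ran here):
    sudo cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
        /usr/local/share/ca-certificates/testbed.crt        # assumed destination directory
    sudo update-ca-certificates
    # RedHat-family nodes would instead use the skipped variant:
    #   sudo cp testbed.crt /etc/pki/ca-trust/source/anchors/ && sudo update-ca-trust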
2025-02-10 09:11:38.614382 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-02-10 09:11:38.615433 | orchestrator | Monday 10 February 2025 09:11:38 +0000 (0:00:01.701) 0:00:15.327 ******* 2025-02-10 09:11:40.115681 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:11:40.119999 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:11:40.120118 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:11:40.121910 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:11:40.123151 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:40.123912 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:11:40.124851 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:11:40.125828 | orchestrator | 2025-02-10 09:11:40.126528 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-02-10 09:11:40.127307 | orchestrator | Monday 10 February 2025 09:11:40 +0000 (0:00:01.508) 0:00:16.836 ******* 2025-02-10 09:11:41.949457 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:41.950534 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:11:41.951051 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:11:41.954927 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:11:41.955312 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:11:41.955753 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:11:41.956143 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:11:41.957542 | orchestrator | 2025-02-10 09:11:41.957948 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-02-10 09:11:41.958622 | orchestrator | Monday 10 February 2025 09:11:41 +0000 (0:00:01.834) 0:00:18.670 ******* 2025-02-10 09:11:42.104083 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:11:42.183894 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:11:42.260475 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:11:42.332429 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:11:42.577863 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:11:42.731544 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:11:42.733109 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:11:42.734957 | orchestrator | 2025-02-10 09:11:42.736064 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-02-10 09:11:42.736871 | orchestrator | 2025-02-10 09:11:42.737479 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-02-10 09:11:42.738335 | orchestrator | Monday 10 February 2025 09:11:42 +0000 (0:00:00.783) 0:00:19.454 ******* 2025-02-10 09:11:45.372082 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:11:45.372318 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:11:45.372373 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:11:45.372405 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:11:45.373544 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:11:45.377874 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:45.378442 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:11:45.378473 | orchestrator | 2025-02-10 09:11:45.381131 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:11:45.381177 | orchestrator | 2025-02-10 09:11:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-02-10 09:11:45.382550 | orchestrator | 2025-02-10 09:11:45 | INFO  | Please wait and do not abort execution. 2025-02-10 09:11:45.382613 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:45.384851 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:45.384941 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:45.384960 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:45.384978 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:45.385469 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:45.385902 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:45.386364 | orchestrator | 2025-02-10 09:11:45.386585 | orchestrator | Monday 10 February 2025 09:11:45 +0000 (0:00:02.636) 0:00:22.090 ******* 2025-02-10 09:11:45.387314 | orchestrator | =============================================================================== 2025-02-10 09:11:45.387814 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.03s 2025-02-10 09:11:45.388044 | orchestrator | Apply netplan configuration --------------------------------------------- 2.76s 2025-02-10 09:11:45.390807 | orchestrator | Install python3-docker -------------------------------------------------- 2.64s 2025-02-10 09:11:45.390918 | orchestrator | Apply netplan configuration --------------------------------------------- 1.87s 2025-02-10 09:11:45.390937 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.83s 2025-02-10 09:11:45.390955 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.80s 2025-02-10 09:11:45.391667 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.70s 2025-02-10 09:11:45.392116 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s 2025-02-10 09:11:45.392715 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.51s 2025-02-10 09:11:45.393845 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.78s 2025-02-10 09:11:45.395734 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-02-10 09:11:45.397247 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s 2025-02-10 09:11:45.916254 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-02-10 09:11:47.412367 | orchestrator | 2025-02-10 09:11:47 | INFO  | Task c9ec8b1c-0628-43c6-a260-3e80d6ea32dc (reboot) was prepared for execution. 2025-02-10 09:11:50.488688 | orchestrator | 2025-02-10 09:11:47 | INFO  | It takes a moment until task c9ec8b1c-0628-43c6-a260-3e80d6ea32dc (reboot) has been started and output is visible here. 
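The "Add a workaround service" play that finished above copies workarounds.sh and a systemd unit, reloads systemd, and enables the unit on Debian hosts (the RedHat variant, skipped here, also starts it). The unit file itself is not shown in the log; a minimal sketch of what such a oneshot unit could look like, written as a shell heredoc with assumed paths:

    cat <<'EOF' | sudo tee /etc/systemd/system/workarounds.service   # unit name taken from the task titles
    [Unit]
    Description=Apply testbed workarounds at boot
    After=network-online.target

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/workarounds.sh    # assumed install location of the copied script
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl daemon-reload
    sudo systemctl enable workarounds.service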
2025-02-10 09:11:50.488856 | orchestrator | 2025-02-10 09:11:50.491189 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:11:50.491398 | orchestrator | 2025-02-10 09:11:50.492223 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:11:50.492419 | orchestrator | Monday 10 February 2025 09:11:50 +0000 (0:00:00.154) 0:00:00.154 ******* 2025-02-10 09:11:50.582772 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:11:50.583637 | orchestrator | 2025-02-10 09:11:50.584399 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:11:50.585083 | orchestrator | Monday 10 February 2025 09:11:50 +0000 (0:00:00.096) 0:00:00.251 ******* 2025-02-10 09:11:51.580312 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:11:51.581380 | orchestrator | 2025-02-10 09:11:51.583467 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:11:51.583510 | orchestrator | Monday 10 February 2025 09:11:51 +0000 (0:00:00.998) 0:00:01.249 ******* 2025-02-10 09:11:51.699248 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:11:51.699503 | orchestrator | 2025-02-10 09:11:51.700443 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:11:51.701420 | orchestrator | 2025-02-10 09:11:51.702006 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:11:51.702752 | orchestrator | Monday 10 February 2025 09:11:51 +0000 (0:00:00.120) 0:00:01.369 ******* 2025-02-10 09:11:51.795906 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:11:51.796242 | orchestrator | 2025-02-10 09:11:51.797376 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:11:51.798132 | orchestrator | Monday 10 February 2025 09:11:51 +0000 (0:00:00.097) 0:00:01.466 ******* 2025-02-10 09:11:52.443880 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:11:52.444261 | orchestrator | 2025-02-10 09:11:52.445611 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:11:52.447713 | orchestrator | Monday 10 February 2025 09:11:52 +0000 (0:00:00.646) 0:00:02.112 ******* 2025-02-10 09:11:52.560554 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:11:52.561252 | orchestrator | 2025-02-10 09:11:52.562418 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:11:52.563593 | orchestrator | 2025-02-10 09:11:52.563833 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:11:52.565216 | orchestrator | Monday 10 February 2025 09:11:52 +0000 (0:00:00.115) 0:00:02.227 ******* 2025-02-10 09:11:52.671791 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:11:52.672553 | orchestrator | 2025-02-10 09:11:52.673596 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:11:52.674473 | orchestrator | Monday 10 February 2025 09:11:52 +0000 (0:00:00.113) 0:00:02.341 ******* 2025-02-10 09:11:53.455840 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:11:53.456262 | orchestrator | 2025-02-10 09:11:53.457540 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 
09:11:53.458601 | orchestrator | Monday 10 February 2025 09:11:53 +0000 (0:00:00.782) 0:00:03.124 ******* 2025-02-10 09:11:53.586531 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:11:53.586774 | orchestrator | 2025-02-10 09:11:53.588281 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:11:53.588539 | orchestrator | 2025-02-10 09:11:53.588960 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:11:53.589445 | orchestrator | Monday 10 February 2025 09:11:53 +0000 (0:00:00.130) 0:00:03.255 ******* 2025-02-10 09:11:53.697530 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:11:53.698990 | orchestrator | 2025-02-10 09:11:53.699079 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:11:53.699133 | orchestrator | Monday 10 February 2025 09:11:53 +0000 (0:00:00.110) 0:00:03.365 ******* 2025-02-10 09:11:54.352057 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:11:54.352512 | orchestrator | 2025-02-10 09:11:54.353558 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:11:54.354497 | orchestrator | Monday 10 February 2025 09:11:54 +0000 (0:00:00.655) 0:00:04.021 ******* 2025-02-10 09:11:54.469283 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:11:54.470207 | orchestrator | 2025-02-10 09:11:54.472757 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:11:54.473375 | orchestrator | 2025-02-10 09:11:54.475274 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:11:54.475973 | orchestrator | Monday 10 February 2025 09:11:54 +0000 (0:00:00.115) 0:00:04.137 ******* 2025-02-10 09:11:54.578795 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:11:54.579022 | orchestrator | 2025-02-10 09:11:54.579061 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:11:54.579091 | orchestrator | Monday 10 February 2025 09:11:54 +0000 (0:00:00.110) 0:00:04.248 ******* 2025-02-10 09:11:55.225578 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:11:55.225919 | orchestrator | 2025-02-10 09:11:55.227235 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:11:55.227317 | orchestrator | Monday 10 February 2025 09:11:55 +0000 (0:00:00.645) 0:00:04.893 ******* 2025-02-10 09:11:55.335863 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:11:55.336920 | orchestrator | 2025-02-10 09:11:55.337681 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:11:55.341018 | orchestrator | 2025-02-10 09:11:55.341173 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:11:55.341204 | orchestrator | Monday 10 February 2025 09:11:55 +0000 (0:00:00.112) 0:00:05.005 ******* 2025-02-10 09:11:55.440776 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:11:55.440923 | orchestrator | 2025-02-10 09:11:55.440943 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:11:55.443214 | orchestrator | Monday 10 February 2025 09:11:55 +0000 (0:00:00.103) 0:00:05.109 ******* 2025-02-10 09:11:56.113881 | orchestrator | changed: [testbed-node-5] 2025-02-10 
09:11:56.114287 | orchestrator | 2025-02-10 09:11:56.144309 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:11:56.144536 | orchestrator | Monday 10 February 2025 09:11:56 +0000 (0:00:00.673) 0:00:05.783 ******* 2025-02-10 09:11:56.144590 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:11:56.144666 | orchestrator | 2025-02-10 09:11:56.144911 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:11:56.145777 | orchestrator | 2025-02-10 09:11:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:11:56.146118 | orchestrator | 2025-02-10 09:11:56 | INFO  | Please wait and do not abort execution. 2025-02-10 09:11:56.146156 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:56.146820 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:56.147290 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:56.148068 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:56.148384 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:56.149093 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:11:56.149431 | orchestrator | 2025-02-10 09:11:56.149924 | orchestrator | Monday 10 February 2025 09:11:56 +0000 (0:00:00.031) 0:00:05.814 ******* 2025-02-10 09:11:56.150383 | orchestrator | =============================================================================== 2025-02-10 09:11:56.150786 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.40s 2025-02-10 09:11:56.151199 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2025-02-10 09:11:56.151603 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2025-02-10 09:11:56.747949 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-02-10 09:11:58.262717 | orchestrator | 2025-02-10 09:11:58 | INFO  | Task 968670c4-054b-4493-a82b-67a38ffa4ca2 (wait-for-connection) was prepared for execution. 2025-02-10 09:12:01.556557 | orchestrator | 2025-02-10 09:11:58 | INFO  | It takes a moment until task 968670c4-054b-4493-a82b-67a38ffa4ca2 (wait-for-connection) has been started and output is visible here. 
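The reboot play above deliberately uses the "do not wait for the reboot to complete" task, and the separate wait-for-connection play started above (output follows) then blocks until every node answers again. In plain shell the same fire-and-forget-then-poll pattern looks roughly like this; host names and timeouts are illustrative:

    for node in testbed-node-{0..5}; do
        ssh "$node" 'sudo systemctl reboot' || true         # do not wait for the reboot to complete
    done
    for node in testbed-node-{0..5}; do
        until ssh -o ConnectTimeout=5 "$node" true 2>/dev/null; do
            sleep 5                                         # wait until the remote system is reachable
        done
    done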
2025-02-10 09:12:01.556739 | orchestrator | 2025-02-10 09:12:01.560208 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-02-10 09:12:14.151031 | orchestrator | 2025-02-10 09:12:14.151197 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-02-10 09:12:14.151218 | orchestrator | Monday 10 February 2025 09:12:01 +0000 (0:00:00.207) 0:00:00.207 ******* 2025-02-10 09:12:14.151253 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:12:14.151787 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:12:14.151817 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:12:14.151833 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:12:14.151849 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:12:14.151864 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:12:14.151885 | orchestrator | 2025-02-10 09:12:14.152542 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:12:14.152837 | orchestrator | 2025-02-10 09:12:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:12:14.152944 | orchestrator | 2025-02-10 09:12:14 | INFO  | Please wait and do not abort execution. 2025-02-10 09:12:14.153471 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:12:14.157520 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:12:14.157621 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:12:14.157643 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:12:14.157659 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:12:14.157674 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:12:14.157689 | orchestrator | 2025-02-10 09:12:14.157704 | orchestrator | Monday 10 February 2025 09:12:14 +0000 (0:00:12.588) 0:00:12.796 ******* 2025-02-10 09:12:14.157766 | orchestrator | =============================================================================== 2025-02-10 09:12:14.158126 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.59s 2025-02-10 09:12:14.659599 | orchestrator | + osism apply hddtemp 2025-02-10 09:12:16.189754 | orchestrator | 2025-02-10 09:12:16 | INFO  | Task 1364ac02-b120-4745-b59c-436ab46cc8fa (hddtemp) was prepared for execution. 2025-02-10 09:12:19.363444 | orchestrator | 2025-02-10 09:12:16 | INFO  | It takes a moment until task 1364ac02-b120-4745-b59c-436ab46cc8fa (hddtemp) has been started and output is visible here. 
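The hddtemp play started above (output follows) removes the old hddtemp package and relies on the in-kernel drivetemp hwmon driver plus lm-sensors instead. On a Debian-family node the same steps could be performed by hand roughly as follows; the modules-load.d file name is an assumption:

    sudo apt-get remove -y hddtemp                                   # "Remove hddtemp package"
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf     # "Enable Kernel Module drivetemp" (assumed file name)
    sudo modprobe drivetemp                                          # "Load Kernel Module drivetemp"
    sudo apt-get install -y lm-sensors                               # "Install lm-sensors"
    sudo systemctl enable --now lm-sensors.service                   # "Manage lm-sensors service"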
2025-02-10 09:12:19.363637 | orchestrator | 2025-02-10 09:12:19.363782 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-02-10 09:12:19.363812 | orchestrator | 2025-02-10 09:12:19.364659 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-02-10 09:12:19.367072 | orchestrator | Monday 10 February 2025 09:12:19 +0000 (0:00:00.200) 0:00:00.200 ******* 2025-02-10 09:12:19.521981 | orchestrator | ok: [testbed-manager] 2025-02-10 09:12:19.597504 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:12:19.685318 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:12:19.762070 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:12:19.838575 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:12:20.091944 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:12:20.092133 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:12:20.093159 | orchestrator | 2025-02-10 09:12:20.094943 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-02-10 09:12:21.302130 | orchestrator | Monday 10 February 2025 09:12:20 +0000 (0:00:00.727) 0:00:00.928 ******* 2025-02-10 09:12:21.302427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:12:21.302530 | orchestrator | 2025-02-10 09:12:21.302995 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-02-10 09:12:21.303535 | orchestrator | Monday 10 February 2025 09:12:21 +0000 (0:00:01.209) 0:00:02.137 ******* 2025-02-10 09:12:23.252394 | orchestrator | ok: [testbed-manager] 2025-02-10 09:12:23.253505 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:12:23.253544 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:12:23.256139 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:12:23.256781 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:12:23.256834 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:12:23.257447 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:12:23.258128 | orchestrator | 2025-02-10 09:12:23.258987 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-02-10 09:12:23.259412 | orchestrator | Monday 10 February 2025 09:12:23 +0000 (0:00:01.948) 0:00:04.085 ******* 2025-02-10 09:12:23.851005 | orchestrator | changed: [testbed-manager] 2025-02-10 09:12:23.944631 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:12:24.379018 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:12:24.380186 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:12:24.380521 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:12:24.381264 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:12:24.381984 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:12:24.382898 | orchestrator | 2025-02-10 09:12:24.383422 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-02-10 09:12:24.383866 | orchestrator | Monday 10 February 2025 09:12:24 +0000 (0:00:01.126) 0:00:05.212 ******* 2025-02-10 09:12:25.719793 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:12:25.720990 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:12:25.721035 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:12:25.721935 | orchestrator | ok: [testbed-node-3] 2025-02-10 
09:12:25.725089 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:12:25.725229 | orchestrator | ok: [testbed-manager] 2025-02-10 09:12:25.725253 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:12:25.725309 | orchestrator | 2025-02-10 09:12:25.726144 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-02-10 09:12:25.726785 | orchestrator | Monday 10 February 2025 09:12:25 +0000 (0:00:01.341) 0:00:06.554 ******* 2025-02-10 09:12:25.980560 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:12:26.060960 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:12:26.144714 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:12:26.228770 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:12:26.361856 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:12:26.365778 | orchestrator | changed: [testbed-manager] 2025-02-10 09:12:26.366713 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:12:26.366749 | orchestrator | 2025-02-10 09:12:26.366773 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-02-10 09:12:26.367096 | orchestrator | Monday 10 February 2025 09:12:26 +0000 (0:00:00.646) 0:00:07.200 ******* 2025-02-10 09:12:38.631665 | orchestrator | changed: [testbed-manager] 2025-02-10 09:12:38.632772 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:12:38.632822 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:12:38.633105 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:12:38.634267 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:12:38.634971 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:12:38.635980 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:12:38.636564 | orchestrator | 2025-02-10 09:12:38.637391 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-02-10 09:12:38.638290 | orchestrator | Monday 10 February 2025 09:12:38 +0000 (0:00:12.261) 0:00:19.461 ******* 2025-02-10 09:12:39.837303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:12:39.841323 | orchestrator | 2025-02-10 09:12:39.845412 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-02-10 09:12:39.851795 | orchestrator | Monday 10 February 2025 09:12:39 +0000 (0:00:01.206) 0:00:20.668 ******* 2025-02-10 09:12:41.792810 | orchestrator | changed: [testbed-manager] 2025-02-10 09:12:41.793023 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:12:41.793716 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:12:41.794512 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:12:41.795970 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:12:41.796318 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:12:41.798189 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:12:41.799123 | orchestrator | 2025-02-10 09:12:41.799159 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:12:41.799466 | orchestrator | 2025-02-10 09:12:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:12:41.800083 | orchestrator | 2025-02-10 09:12:41 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:12:41.800543 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:12:41.801112 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:41.801478 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:41.802435 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:41.803020 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:41.803513 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:41.804201 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:41.804471 | orchestrator | 2025-02-10 09:12:41.805389 | orchestrator | Monday 10 February 2025 09:12:41 +0000 (0:00:01.961) 0:00:22.629 ******* 2025-02-10 09:12:41.805896 | orchestrator | =============================================================================== 2025-02-10 09:12:41.806427 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.26s 2025-02-10 09:12:41.806900 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.96s 2025-02-10 09:12:41.807320 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.95s 2025-02-10 09:12:41.807928 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.34s 2025-02-10 09:12:41.808430 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2025-02-10 09:12:41.809152 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s 2025-02-10 09:12:41.809647 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s 2025-02-10 09:12:41.809982 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2025-02-10 09:12:41.810426 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.65s 2025-02-10 09:12:42.395388 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-02-10 09:12:45.644950 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-10 09:12:45.680190 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-02-10 09:12:45.680308 | orchestrator | + local max_attempts=60 2025-02-10 09:12:45.680328 | orchestrator | + local name=ceph-ansible 2025-02-10 09:12:45.680385 | orchestrator | + local attempt_num=1 2025-02-10 09:12:45.680401 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-02-10 09:12:45.680435 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 09:12:45.680619 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-02-10 09:12:45.680643 | orchestrator | + local max_attempts=60 2025-02-10 09:12:45.680657 | orchestrator | + local name=kolla-ansible 2025-02-10 09:12:45.680672 | orchestrator | + local attempt_num=1 2025-02-10 09:12:45.680690 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-02-10 09:12:45.707871 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 09:12:45.708059 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-02-10 09:12:45.708107 | orchestrator | + local max_attempts=60 2025-02-10 09:12:45.708124 | orchestrator | + local name=osism-ansible 2025-02-10 09:12:45.708139 | orchestrator | + local attempt_num=1 2025-02-10 09:12:45.708160 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-02-10 09:12:45.736301 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 09:12:45.914468 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-10 09:12:45.914594 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-02-10 09:12:45.914627 | orchestrator | ARA in ceph-ansible already disabled. 2025-02-10 09:12:46.068002 | orchestrator | ARA in kolla-ansible already disabled. 2025-02-10 09:12:46.243986 | orchestrator | ARA in osism-ansible already disabled. 2025-02-10 09:12:46.422414 | orchestrator | ARA in osism-kubernetes already disabled. 2025-02-10 09:12:46.422559 | orchestrator | + osism apply gather-facts 2025-02-10 09:12:47.932777 | orchestrator | 2025-02-10 09:12:47 | INFO  | Task f2d8dea7-523f-4fe2-955c-39c805f6f0da (gather-facts) was prepared for execution. 2025-02-10 09:12:47.933660 | orchestrator | 2025-02-10 09:12:47 | INFO  | It takes a moment until task f2d8dea7-523f-4fe2-955c-39c805f6f0da (gather-facts) has been started and output is visible here. 2025-02-10 09:12:51.144625 | orchestrator | 2025-02-10 09:12:51.144857 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:12:51.144892 | orchestrator | 2025-02-10 09:12:51.147854 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:12:51.149371 | orchestrator | Monday 10 February 2025 09:12:51 +0000 (0:00:00.180) 0:00:00.180 ******* 2025-02-10 09:12:56.058557 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:12:56.058748 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:12:56.058772 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:12:56.058793 | orchestrator | ok: [testbed-manager] 2025-02-10 09:12:56.059094 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:12:56.060058 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:12:56.061016 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:12:56.061780 | orchestrator | 2025-02-10 09:12:56.062360 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-10 09:12:56.063136 | orchestrator | 2025-02-10 09:12:56.063790 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-10 09:12:56.064813 | orchestrator | Monday 10 February 2025 09:12:56 +0000 (0:00:04.916) 0:00:05.096 ******* 2025-02-10 09:12:56.218944 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:12:56.291330 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:12:56.390780 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:12:56.478596 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:12:56.563173 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:12:56.606540 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:12:56.606880 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:12:56.607197 | orchestrator | 2025-02-10 09:12:56.608132 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:12:56.609577 | orchestrator | 2025-02-10 09:12:56 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-02-10 09:12:56.609845 | orchestrator | 2025-02-10 09:12:56 | INFO  | Please wait and do not abort execution. 2025-02-10 09:12:56.609881 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:56.610152 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:56.610714 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:56.610954 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:56.611404 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:56.611841 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:56.612241 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:12:56.612469 | orchestrator | 2025-02-10 09:12:56.612838 | orchestrator | Monday 10 February 2025 09:12:56 +0000 (0:00:00.550) 0:00:05.646 ******* 2025-02-10 09:12:56.613050 | orchestrator | =============================================================================== 2025-02-10 09:12:56.613168 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.92s 2025-02-10 09:12:56.613546 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2025-02-10 09:12:57.215941 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-02-10 09:12:57.229163 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-02-10 09:12:57.241940 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-02-10 09:12:57.260931 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-02-10 09:12:57.280387 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-02-10 09:12:57.292926 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-02-10 09:12:57.304966 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-02-10 09:12:57.323867 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-02-10 09:12:57.346473 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-02-10 09:12:57.365329 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-02-10 09:12:57.380835 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-02-10 09:12:57.404930 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-02-10 09:12:57.423522 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-02-10 09:12:57.439616 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-02-10 09:12:57.453877 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-02-10 09:12:57.468550 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-02-10 09:12:57.481299 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-02-10 09:12:57.493824 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-02-10 09:12:57.509294 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-02-10 09:12:57.522967 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-02-10 09:12:57.533636 | orchestrator | + [[ false == \t\r\u\e ]] 2025-02-10 09:12:57.843768 | orchestrator | changed 2025-02-10 09:12:57.945530 | 2025-02-10 09:12:57.945680 | TASK [Deploy services] 2025-02-10 09:12:58.064694 | orchestrator | skipping: Conditional result was False 2025-02-10 09:12:58.084250 | 2025-02-10 09:12:58.084413 | TASK [Deploy in a nutshell] 2025-02-10 09:12:58.823606 | orchestrator | + set -e 2025-02-10 09:12:58.823945 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 09:12:58.823961 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 09:12:58.823969 | orchestrator | ++ INTERACTIVE=false 2025-02-10 09:12:58.823992 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 09:12:58.823998 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 09:12:58.824004 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 09:12:58.824011 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 09:12:58.824020 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 09:12:58.824026 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 09:12:58.824031 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 09:12:58.824036 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 09:12:58.824041 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 09:12:58.824046 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 09:12:58.824051 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 09:12:58.824056 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 09:12:58.824061 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 09:12:58.824066 | orchestrator | ++ export ARA=false 2025-02-10 09:12:58.824071 | orchestrator | ++ ARA=false 2025-02-10 09:12:58.824076 | orchestrator | ++ export TEMPEST=false 2025-02-10 09:12:58.824080 | orchestrator | ++ TEMPEST=false 2025-02-10 09:12:58.824085 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 09:12:58.824090 | orchestrator | ++ IS_ZUUL=true 2025-02-10 09:12:58.824098 | orchestrator | 2025-02-10 09:12:58.824675 | orchestrator | # PULL IMAGES 2025-02-10 09:12:58.824682 | orchestrator | 2025-02-10 09:12:58.824687 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 09:12:58.824692 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 09:12:58.824697 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 09:12:58.824702 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 09:12:58.824707 
| orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 09:12:58.824711 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 09:12:58.824720 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 09:12:58.824725 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 09:12:58.824730 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 09:12:58.824734 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 09:12:58.824739 | orchestrator | + echo 2025-02-10 09:12:58.824744 | orchestrator | + echo '# PULL IMAGES' 2025-02-10 09:12:58.824749 | orchestrator | + echo 2025-02-10 09:12:58.824756 | orchestrator | ++ semver 8.1.0 7.0.0 2025-02-10 09:12:58.880366 | orchestrator | + [[ 1 -ge 0 ]] 2025-02-10 09:13:00.343076 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-02-10 09:13:00.343215 | orchestrator | 2025-02-10 09:13:00 | INFO  | Trying to run play pull-images in environment custom 2025-02-10 09:13:00.391388 | orchestrator | 2025-02-10 09:13:00 | INFO  | Task 81907811-6361-4f82-a3a7-54b85fb702f6 (pull-images) was prepared for execution. 2025-02-10 09:13:03.547953 | orchestrator | 2025-02-10 09:13:00 | INFO  | It takes a moment until task 81907811-6361-4f82-a3a7-54b85fb702f6 (pull-images) has been started and output is visible here. 2025-02-10 09:13:03.548083 | orchestrator | 2025-02-10 09:13:03.548141 | orchestrator | PLAY [Pull images] ************************************************************* 2025-02-10 09:13:03.548160 | orchestrator | 2025-02-10 09:13:03.548178 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-02-10 09:13:03.551012 | orchestrator | Monday 10 February 2025 09:13:03 +0000 (0:00:00.152) 0:00:00.152 ******* 2025-02-10 09:13:43.575128 | orchestrator | changed: [testbed-manager] 2025-02-10 09:14:36.965959 | orchestrator | 2025-02-10 09:14:36.966232 | orchestrator | TASK [Pull other images] ******************************************************* 2025-02-10 09:14:36.966265 | orchestrator | Monday 10 February 2025 09:13:43 +0000 (0:00:40.026) 0:00:40.179 ******* 2025-02-10 09:14:36.966301 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-02-10 09:14:36.966738 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-02-10 09:14:36.966855 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-02-10 09:14:36.966882 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-02-10 09:14:36.966959 | orchestrator | changed: [testbed-manager] => (item=common) 2025-02-10 09:14:36.967108 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-02-10 09:14:36.967128 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-02-10 09:14:36.967146 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-02-10 09:14:36.967264 | orchestrator | changed: [testbed-manager] => (item=heat) 2025-02-10 09:14:36.967287 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-02-10 09:14:36.967458 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-02-10 09:14:36.968773 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-02-10 09:14:36.971491 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-02-10 09:14:36.971544 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-02-10 09:14:36.973057 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-02-10 09:14:36.973087 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-02-10 09:14:36.973096 | orchestrator | 
changed: [testbed-manager] => (item=nova) 2025-02-10 09:14:36.973104 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-02-10 09:14:36.973119 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-02-10 09:14:36.973309 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-02-10 09:14:36.973327 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-02-10 09:14:36.973350 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-02-10 09:14:36.973361 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-02-10 09:14:36.973369 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-02-10 09:14:36.973376 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-02-10 09:14:36.973388 | orchestrator | 2025-02-10 09:14:36.973513 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:14:36.973784 | orchestrator | 2025-02-10 09:14:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:14:36.974366 | orchestrator | 2025-02-10 09:14:36 | INFO  | Please wait and do not abort execution. 2025-02-10 09:14:36.974399 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:14:36.974695 | orchestrator | 2025-02-10 09:14:36.975009 | orchestrator | Monday 10 February 2025 09:14:36 +0000 (0:00:53.392) 0:01:33.571 ******* 2025-02-10 09:14:36.975161 | orchestrator | =============================================================================== 2025-02-10 09:14:36.975507 | orchestrator | Pull other images ------------------------------------------------------ 53.39s 2025-02-10 09:14:36.976252 | orchestrator | Pull keystone image ---------------------------------------------------- 40.03s 2025-02-10 09:14:39.101978 | orchestrator | 2025-02-10 09:14:39 | INFO  | Trying to run play wipe-partitions in environment custom 2025-02-10 09:14:39.152686 | orchestrator | 2025-02-10 09:14:39 | INFO  | Task 55294266-b4d4-4dc0-9893-573c261399f5 (wipe-partitions) was prepared for execution. 2025-02-10 09:14:42.407972 | orchestrator | 2025-02-10 09:14:39 | INFO  | It takes a moment until task 55294266-b4d4-4dc0-9893-573c261399f5 (wipe-partitions) has been started and output is visible here. 
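The pull-images play that just finished prefetches the Keystone image and then the remaining Kolla service images on the manager so later deploy steps do not stall on downloads. A rough shell equivalent of that loop is sketched below; the registry, namespace, and tag are placeholders, since the log does not reveal the actual image references:

    REGISTRY=registry.example.com/kolla      # placeholder, not the real registry used by the testbed
    TAG=2024.1                               # placeholder; OPENSTACK_VERSION=2024.1 appears in manager-vars.sh above
    for image in keystone aodh barbican ceilometer cinder common designate glance grafana heat \
                 horizon ironic loadbalancer magnum mariadb memcached neutron nova octavia \
                 opensearch openvswitch ovn placement rabbitmq redis skyline; do
        docker pull "${REGISTRY}/${image}:${TAG}"
    done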
2025-02-10 09:14:42.408137 | orchestrator | 2025-02-10 09:14:42.409026 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-02-10 09:14:42.409697 | orchestrator | 2025-02-10 09:14:42.412826 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-02-10 09:14:42.413758 | orchestrator | Monday 10 February 2025 09:14:42 +0000 (0:00:00.132) 0:00:00.132 ******* 2025-02-10 09:14:43.053063 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:14:43.053258 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:14:43.053867 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:14:43.054116 | orchestrator | 2025-02-10 09:14:43.054839 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-02-10 09:14:43.055570 | orchestrator | Monday 10 February 2025 09:14:43 +0000 (0:00:00.650) 0:00:00.782 ******* 2025-02-10 09:14:43.212221 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:43.314611 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:44.315576 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:44.315714 | orchestrator | 2025-02-10 09:14:44.315735 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-02-10 09:14:44.315754 | orchestrator | Monday 10 February 2025 09:14:43 +0000 (0:00:00.252) 0:00:01.035 ******* 2025-02-10 09:14:44.315786 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:14:44.317414 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:14:44.317475 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:14:44.317591 | orchestrator | 2025-02-10 09:14:44.317900 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-02-10 09:14:44.317931 | orchestrator | Monday 10 February 2025 09:14:44 +0000 (0:00:01.005) 0:00:02.040 ******* 2025-02-10 09:14:44.481987 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:44.586470 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:44.590162 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:44.590713 | orchestrator | 2025-02-10 09:14:44.590747 | orchestrator | TASK [Check device availability] *********************************************** 2025-02-10 09:14:44.590770 | orchestrator | Monday 10 February 2025 09:14:44 +0000 (0:00:00.274) 0:00:02.314 ******* 2025-02-10 09:14:45.819498 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-02-10 09:14:45.819684 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-02-10 09:14:45.819707 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-02-10 09:14:45.819732 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-02-10 09:14:45.822230 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-02-10 09:14:45.822586 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-02-10 09:14:45.822621 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-02-10 09:14:45.822836 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-02-10 09:14:45.823044 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-02-10 09:14:45.823377 | orchestrator | 2025-02-10 09:14:45.823545 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-02-10 09:14:45.824949 | orchestrator | Monday 10 February 2025 09:14:45 +0000 (0:00:01.232) 0:00:03.547 ******* 2025-02-10 09:14:47.171100 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-02-10 09:14:47.174659 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-02-10 09:14:47.176578 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-02-10 09:14:47.177833 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-02-10 09:14:47.179204 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-02-10 09:14:47.179990 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-02-10 09:14:47.181573 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-02-10 09:14:47.182423 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-02-10 09:14:47.183053 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-02-10 09:14:47.183723 | orchestrator | 2025-02-10 09:14:47.184193 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-02-10 09:14:47.189806 | orchestrator | Monday 10 February 2025 09:14:47 +0000 (0:00:01.351) 0:00:04.898 ******* 2025-02-10 09:14:50.173430 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-02-10 09:14:50.174979 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-02-10 09:14:50.175099 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-02-10 09:14:50.175164 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-02-10 09:14:50.175192 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-02-10 09:14:50.175221 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-02-10 09:14:50.175383 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-02-10 09:14:50.175484 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-02-10 09:14:50.175771 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-02-10 09:14:50.176124 | orchestrator | 2025-02-10 09:14:50.176269 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-02-10 09:14:50.179107 | orchestrator | Monday 10 February 2025 09:14:50 +0000 (0:00:02.995) 0:00:07.894 ******* 2025-02-10 09:14:50.805638 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:14:50.806618 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:14:50.806747 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:14:50.806773 | orchestrator | 2025-02-10 09:14:50.807471 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-02-10 09:14:50.808087 | orchestrator | Monday 10 February 2025 09:14:50 +0000 (0:00:00.641) 0:00:08.535 ******* 2025-02-10 09:14:51.475257 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:14:51.475417 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:14:51.475428 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:14:51.475436 | orchestrator | 2025-02-10 09:14:51.475657 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:14:51.479011 | orchestrator | 2025-02-10 09:14:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:14:51.479172 | orchestrator | 2025-02-10 09:14:51 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:14:51.479196 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:14:51.479407 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:14:51.479903 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:14:51.480390 | orchestrator | 2025-02-10 09:14:51.483957 | orchestrator | Monday 10 February 2025 09:14:51 +0000 (0:00:00.665) 0:00:09.201 ******* 2025-02-10 09:14:51.485572 | orchestrator | =============================================================================== 2025-02-10 09:14:51.485728 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.00s 2025-02-10 09:14:51.485757 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.35s 2025-02-10 09:14:51.488252 | orchestrator | Check device availability ----------------------------------------------- 1.23s 2025-02-10 09:14:51.488644 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 1.01s 2025-02-10 09:14:51.489308 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2025-02-10 09:14:51.490108 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.65s 2025-02-10 09:14:51.496494 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s 2025-02-10 09:14:51.496894 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-02-10 09:14:51.497898 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s 2025-02-10 09:14:53.541089 | orchestrator | 2025-02-10 09:14:53 | INFO  | Task 9f490553-f933-4ccb-90db-d1414fa69f53 (facts) was prepared for execution. 2025-02-10 09:14:57.063241 | orchestrator | 2025-02-10 09:14:53 | INFO  | It takes a moment until task 9f490553-f933-4ccb-90db-d1414fa69f53 (facts) has been started and output is visible here. 
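
Note: the "Wipe partitions" play above boils down to standard block-device tooling run against the spare OSD disks (/dev/sdb, /dev/sdc, /dev/sdd); UID 167 is the ceph user used inside the Ceph containers. A minimal Ansible sketch of those steps follows; the module choices and task layout are assumptions for illustration, not the playbook actually shipped by OSISM.

# Sketch of the wipe steps seen above (assumed layout, not the original playbook).
- name: Wipe partitions (sketch)
  hosts: testbed-node-3,testbed-node-4,testbed-node-5
  become: true
  vars:
    osd_devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
  tasks:
    - name: Wipe filesystem and partition signatures
      ansible.builtin.command: "wipefs --all {{ item }}"
      loop: "{{ osd_devices }}"

    - name: Overwrite the first 32M with zeros
      ansible.builtin.command: "dd if=/dev/zero of={{ item }} bs=1M count=32"
      loop: "{{ osd_devices }}"

    - name: Reload udev rules
      ansible.builtin.command: udevadm control --reload-rules

    - name: Request device events from the kernel
      ansible.builtin.command: udevadm trigger
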
2025-02-10 09:14:57.063469 | orchestrator | 2025-02-10 09:14:57.069008 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-10 09:14:57.077062 | orchestrator | 2025-02-10 09:14:57.077197 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-10 09:14:57.762206 | orchestrator | Monday 10 February 2025 09:14:57 +0000 (0:00:00.229) 0:00:00.229 ******* 2025-02-10 09:14:57.762392 | orchestrator | ok: [testbed-manager] 2025-02-10 09:14:58.339208 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:14:58.345851 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:14:58.345920 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:14:58.346945 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:14:58.352810 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:14:58.355196 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:14:58.358887 | orchestrator | 2025-02-10 09:14:58.359118 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-10 09:14:58.359550 | orchestrator | Monday 10 February 2025 09:14:58 +0000 (0:00:01.276) 0:00:01.506 ******* 2025-02-10 09:14:58.500116 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:14:58.580567 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:14:58.670962 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:14:58.767832 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:14:58.849674 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:59.654213 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:59.655267 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:59.657633 | orchestrator | 2025-02-10 09:14:59.658821 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:14:59.658866 | orchestrator | 2025-02-10 09:14:59.662150 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:14:59.663035 | orchestrator | Monday 10 February 2025 09:14:59 +0000 (0:00:01.318) 0:00:02.825 ******* 2025-02-10 09:15:04.259911 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:15:04.262082 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:15:04.262135 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:15:04.262151 | orchestrator | ok: [testbed-manager] 2025-02-10 09:15:04.262177 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:04.263467 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:15:04.264847 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:15:04.265878 | orchestrator | 2025-02-10 09:15:04.270194 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-10 09:15:04.270441 | orchestrator | 2025-02-10 09:15:04.270476 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-10 09:15:04.270719 | orchestrator | Monday 10 February 2025 09:15:04 +0000 (0:00:04.603) 0:00:07.428 ******* 2025-02-10 09:15:04.675457 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:15:04.773973 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:15:04.952601 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:15:05.039924 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:15:05.149007 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:05.187762 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:05.188744 | orchestrator | skipping: 
[testbed-node-5] 2025-02-10 09:15:05.189180 | orchestrator | 2025-02-10 09:15:05.189731 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:15:05.190510 | orchestrator | 2025-02-10 09:15:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:15:05.190708 | orchestrator | 2025-02-10 09:15:05 | INFO  | Please wait and do not abort execution. 2025-02-10 09:15:05.193880 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:15:05.194953 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:15:05.195566 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:15:05.197685 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:15:05.198129 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:15:05.198593 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:15:05.198940 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:15:05.199322 | orchestrator | 2025-02-10 09:15:05.200233 | orchestrator | Monday 10 February 2025 09:15:05 +0000 (0:00:00.931) 0:00:08.359 ******* 2025-02-10 09:15:05.200607 | orchestrator | =============================================================================== 2025-02-10 09:15:05.201422 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.60s 2025-02-10 09:15:05.201760 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2025-02-10 09:15:05.205955 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.28s 2025-02-10 09:15:05.206852 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.93s 2025-02-10 09:15:07.448142 | orchestrator | 2025-02-10 09:15:07 | INFO  | Task 146daf32-4af7-412e-9dfe-b8408b30d611 (ceph-configure-lvm-volumes) was prepared for execution. 2025-02-10 09:15:11.784796 | orchestrator | 2025-02-10 09:15:07 | INFO  | It takes a moment until task 146daf32-4af7-412e-9dfe-b8408b30d611 (ceph-configure-lvm-volumes) has been started and output is visible here. 
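
Note: the ceph-configure-lvm-volumes task that starts next inspects each storage node's block devices, assigns a stable UUID per OSD disk, and emits a ceph_osd_devices map plus an lvm_volumes list (the variable ceph-ansible consumes for OSD creation), as shown in the "Print configuration data" output further down. Based on the values printed for testbed-node-3, the generated structure is equivalent to the YAML below; the exact file the "Write configuration file" handler creates on testbed-manager is an assumption here.

# Equivalent YAML for the structure printed for testbed-node-3 below
# (values taken from this log; file name/location on the manager is assumed).
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 70e6c2b1-f69e-5685-9251-bc72a13d87ec
  sdc:
    osd_lvm_uuid: f3b4a615-299b-50bf-af8e-26b6dc38e729
lvm_volumes:
  - data: osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec
    data_vg: ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec
  - data: osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729
    data_vg: ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729
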
2025-02-10 09:15:11.784963 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:15:12.548435 | orchestrator | 2025-02-10 09:15:12.552949 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-10 09:15:12.554008 | orchestrator | 2025-02-10 09:15:12.555625 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:15:12.556990 | orchestrator | Monday 10 February 2025 09:15:12 +0000 (0:00:00.655) 0:00:00.655 ******* 2025-02-10 09:15:12.855640 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:15:12.858129 | orchestrator | 2025-02-10 09:15:12.859221 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:15:12.859558 | orchestrator | Monday 10 February 2025 09:15:12 +0000 (0:00:00.314) 0:00:00.969 ******* 2025-02-10 09:15:13.160971 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:13.161508 | orchestrator | 2025-02-10 09:15:13.161547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:13.748058 | orchestrator | Monday 10 February 2025 09:15:13 +0000 (0:00:00.298) 0:00:01.268 ******* 2025-02-10 09:15:13.748220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-02-10 09:15:13.748305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-02-10 09:15:13.749759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-02-10 09:15:13.751954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-02-10 09:15:13.752709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-02-10 09:15:13.753481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-02-10 09:15:13.754131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-02-10 09:15:13.754584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-02-10 09:15:13.755296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-02-10 09:15:13.755901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-02-10 09:15:13.756693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-02-10 09:15:13.757438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-02-10 09:15:13.757946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-02-10 09:15:13.759070 | orchestrator | 2025-02-10 09:15:13.761887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:13.764173 | orchestrator | Monday 10 February 2025 09:15:13 +0000 (0:00:00.594) 0:00:01.862 ******* 2025-02-10 09:15:14.013276 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:14.015543 | orchestrator | 2025-02-10 09:15:14.016842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:14.016909 | orchestrator | Monday 10 February 2025 09:15:14 +0000 
(0:00:00.262) 0:00:02.125 ******* 2025-02-10 09:15:14.247447 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:14.248847 | orchestrator | 2025-02-10 09:15:14.250231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:14.250701 | orchestrator | Monday 10 February 2025 09:15:14 +0000 (0:00:00.233) 0:00:02.358 ******* 2025-02-10 09:15:14.489728 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:14.489900 | orchestrator | 2025-02-10 09:15:14.489923 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:14.489945 | orchestrator | Monday 10 February 2025 09:15:14 +0000 (0:00:00.244) 0:00:02.603 ******* 2025-02-10 09:15:14.689571 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:14.689758 | orchestrator | 2025-02-10 09:15:14.692919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:14.693846 | orchestrator | Monday 10 February 2025 09:15:14 +0000 (0:00:00.200) 0:00:02.804 ******* 2025-02-10 09:15:14.922606 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:14.922882 | orchestrator | 2025-02-10 09:15:14.926422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:14.928615 | orchestrator | Monday 10 February 2025 09:15:14 +0000 (0:00:00.231) 0:00:03.036 ******* 2025-02-10 09:15:15.192210 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:15.192781 | orchestrator | 2025-02-10 09:15:15.192817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:15.195658 | orchestrator | Monday 10 February 2025 09:15:15 +0000 (0:00:00.269) 0:00:03.305 ******* 2025-02-10 09:15:15.466515 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:15.467817 | orchestrator | 2025-02-10 09:15:15.467876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:15.757250 | orchestrator | Monday 10 February 2025 09:15:15 +0000 (0:00:00.274) 0:00:03.580 ******* 2025-02-10 09:15:15.757428 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:15.760037 | orchestrator | 2025-02-10 09:15:15.760069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:16.646841 | orchestrator | Monday 10 February 2025 09:15:15 +0000 (0:00:00.289) 0:00:03.869 ******* 2025-02-10 09:15:16.647055 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7) 2025-02-10 09:15:16.647144 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7) 2025-02-10 09:15:16.647498 | orchestrator | 2025-02-10 09:15:16.647949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:16.648466 | orchestrator | Monday 10 February 2025 09:15:16 +0000 (0:00:00.867) 0:00:04.736 ******* 2025-02-10 09:15:17.741606 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_094c1351-6c25-40a9-b10a-7f3d6a96f205) 2025-02-10 09:15:17.743109 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_094c1351-6c25-40a9-b10a-7f3d6a96f205) 2025-02-10 09:15:17.744662 | orchestrator | 2025-02-10 09:15:17.744718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 
09:15:17.744744 | orchestrator | Monday 10 February 2025 09:15:17 +0000 (0:00:01.113) 0:00:05.850 ******* 2025-02-10 09:15:18.648370 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_494ee814-0dd9-4f0f-8082-b266e2c53997) 2025-02-10 09:15:18.650751 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_494ee814-0dd9-4f0f-8082-b266e2c53997) 2025-02-10 09:15:18.654647 | orchestrator | 2025-02-10 09:15:18.655284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:18.655330 | orchestrator | Monday 10 February 2025 09:15:18 +0000 (0:00:00.911) 0:00:06.761 ******* 2025-02-10 09:15:19.284949 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_086c202d-0ccf-4be9-aa6b-e4e971478b82) 2025-02-10 09:15:19.285217 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_086c202d-0ccf-4be9-aa6b-e4e971478b82) 2025-02-10 09:15:19.285247 | orchestrator | 2025-02-10 09:15:19.290862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:19.290910 | orchestrator | Monday 10 February 2025 09:15:19 +0000 (0:00:00.635) 0:00:07.396 ******* 2025-02-10 09:15:19.623069 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:15:19.623405 | orchestrator | 2025-02-10 09:15:19.623443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:19.624007 | orchestrator | Monday 10 February 2025 09:15:19 +0000 (0:00:00.342) 0:00:07.739 ******* 2025-02-10 09:15:20.152128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-02-10 09:15:20.152406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-02-10 09:15:20.152436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-02-10 09:15:20.152452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-02-10 09:15:20.152475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-02-10 09:15:20.152956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-02-10 09:15:20.154560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-02-10 09:15:20.155734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-02-10 09:15:20.155782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-02-10 09:15:20.157776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-02-10 09:15:20.159486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-02-10 09:15:20.165104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-02-10 09:15:20.165263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-02-10 09:15:20.166686 | orchestrator | 2025-02-10 09:15:20.516692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:20.516851 | orchestrator | Monday 10 February 2025 09:15:20 
+0000 (0:00:00.523) 0:00:08.262 ******* 2025-02-10 09:15:20.516902 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:20.517564 | orchestrator | 2025-02-10 09:15:20.517614 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:20.517649 | orchestrator | Monday 10 February 2025 09:15:20 +0000 (0:00:00.363) 0:00:08.626 ******* 2025-02-10 09:15:20.741610 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:20.742535 | orchestrator | 2025-02-10 09:15:20.742593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:20.742631 | orchestrator | Monday 10 February 2025 09:15:20 +0000 (0:00:00.229) 0:00:08.856 ******* 2025-02-10 09:15:20.929799 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:21.149493 | orchestrator | 2025-02-10 09:15:21.149623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:21.149633 | orchestrator | Monday 10 February 2025 09:15:20 +0000 (0:00:00.188) 0:00:09.044 ******* 2025-02-10 09:15:21.149661 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:21.149693 | orchestrator | 2025-02-10 09:15:21.149700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:21.149706 | orchestrator | Monday 10 February 2025 09:15:21 +0000 (0:00:00.221) 0:00:09.266 ******* 2025-02-10 09:15:21.620771 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:21.622868 | orchestrator | 2025-02-10 09:15:21.625127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:21.802538 | orchestrator | Monday 10 February 2025 09:15:21 +0000 (0:00:00.470) 0:00:09.736 ******* 2025-02-10 09:15:21.802722 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:21.804957 | orchestrator | 2025-02-10 09:15:21.805016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:21.805048 | orchestrator | Monday 10 February 2025 09:15:21 +0000 (0:00:00.180) 0:00:09.917 ******* 2025-02-10 09:15:22.008876 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:22.009076 | orchestrator | 2025-02-10 09:15:22.009106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:22.011307 | orchestrator | Monday 10 February 2025 09:15:22 +0000 (0:00:00.206) 0:00:10.124 ******* 2025-02-10 09:15:22.238695 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:22.238940 | orchestrator | 2025-02-10 09:15:22.238972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:22.239290 | orchestrator | Monday 10 February 2025 09:15:22 +0000 (0:00:00.231) 0:00:10.355 ******* 2025-02-10 09:15:22.991037 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-02-10 09:15:22.993211 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-02-10 09:15:22.993327 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-02-10 09:15:22.993396 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-02-10 09:15:22.995651 | orchestrator | 2025-02-10 09:15:22.997006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:22.997073 | orchestrator | Monday 10 February 2025 09:15:22 +0000 (0:00:00.749) 0:00:11.105 ******* 2025-02-10 09:15:23.179962 | orchestrator | 
skipping: [testbed-node-3] 2025-02-10 09:15:23.180886 | orchestrator | 2025-02-10 09:15:23.180986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:23.186074 | orchestrator | Monday 10 February 2025 09:15:23 +0000 (0:00:00.189) 0:00:11.295 ******* 2025-02-10 09:15:23.369800 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:23.370644 | orchestrator | 2025-02-10 09:15:23.370692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:23.370717 | orchestrator | Monday 10 February 2025 09:15:23 +0000 (0:00:00.189) 0:00:11.484 ******* 2025-02-10 09:15:23.566497 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:23.570877 | orchestrator | 2025-02-10 09:15:23.570952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:23.571782 | orchestrator | Monday 10 February 2025 09:15:23 +0000 (0:00:00.196) 0:00:11.681 ******* 2025-02-10 09:15:23.759631 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:23.761237 | orchestrator | 2025-02-10 09:15:23.763997 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-10 09:15:23.764076 | orchestrator | Monday 10 February 2025 09:15:23 +0000 (0:00:00.195) 0:00:11.876 ******* 2025-02-10 09:15:23.971944 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-02-10 09:15:23.973385 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-02-10 09:15:23.975999 | orchestrator | 2025-02-10 09:15:23.976610 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-10 09:15:23.976659 | orchestrator | Monday 10 February 2025 09:15:23 +0000 (0:00:00.209) 0:00:12.085 ******* 2025-02-10 09:15:24.123463 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:24.123716 | orchestrator | 2025-02-10 09:15:24.125091 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-10 09:15:24.125564 | orchestrator | Monday 10 February 2025 09:15:24 +0000 (0:00:00.152) 0:00:12.237 ******* 2025-02-10 09:15:24.509028 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:24.509933 | orchestrator | 2025-02-10 09:15:24.511486 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-10 09:15:24.797321 | orchestrator | Monday 10 February 2025 09:15:24 +0000 (0:00:00.382) 0:00:12.620 ******* 2025-02-10 09:15:24.797563 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:24.798431 | orchestrator | 2025-02-10 09:15:24.798989 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-10 09:15:24.802605 | orchestrator | Monday 10 February 2025 09:15:24 +0000 (0:00:00.287) 0:00:12.908 ******* 2025-02-10 09:15:25.105277 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:25.108463 | orchestrator | 2025-02-10 09:15:25.110368 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-10 09:15:25.110709 | orchestrator | Monday 10 February 2025 09:15:25 +0000 (0:00:00.311) 0:00:13.219 ******* 2025-02-10 09:15:25.386440 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '70e6c2b1-f69e-5685-9251-bc72a13d87ec'}}) 2025-02-10 09:15:25.386690 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': 'f3b4a615-299b-50bf-af8e-26b6dc38e729'}}) 2025-02-10 09:15:25.388940 | orchestrator | 2025-02-10 09:15:25.389217 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-10 09:15:25.389799 | orchestrator | Monday 10 February 2025 09:15:25 +0000 (0:00:00.282) 0:00:13.501 ******* 2025-02-10 09:15:25.689639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '70e6c2b1-f69e-5685-9251-bc72a13d87ec'}})  2025-02-10 09:15:25.691049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3b4a615-299b-50bf-af8e-26b6dc38e729'}})  2025-02-10 09:15:25.691773 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:25.692112 | orchestrator | 2025-02-10 09:15:25.692557 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-10 09:15:25.693472 | orchestrator | Monday 10 February 2025 09:15:25 +0000 (0:00:00.302) 0:00:13.804 ******* 2025-02-10 09:15:25.864169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '70e6c2b1-f69e-5685-9251-bc72a13d87ec'}})  2025-02-10 09:15:25.865131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3b4a615-299b-50bf-af8e-26b6dc38e729'}})  2025-02-10 09:15:25.868608 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:26.041057 | orchestrator | 2025-02-10 09:15:26.041194 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-10 09:15:26.041213 | orchestrator | Monday 10 February 2025 09:15:25 +0000 (0:00:00.175) 0:00:13.980 ******* 2025-02-10 09:15:26.041243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '70e6c2b1-f69e-5685-9251-bc72a13d87ec'}})  2025-02-10 09:15:26.041957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3b4a615-299b-50bf-af8e-26b6dc38e729'}})  2025-02-10 09:15:26.043591 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:26.044127 | orchestrator | 2025-02-10 09:15:26.045397 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-10 09:15:26.046126 | orchestrator | Monday 10 February 2025 09:15:26 +0000 (0:00:00.169) 0:00:14.149 ******* 2025-02-10 09:15:26.216461 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:26.218430 | orchestrator | 2025-02-10 09:15:26.221572 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-10 09:15:26.222155 | orchestrator | Monday 10 February 2025 09:15:26 +0000 (0:00:00.183) 0:00:14.333 ******* 2025-02-10 09:15:26.371274 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:26.371682 | orchestrator | 2025-02-10 09:15:26.371882 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-10 09:15:26.508046 | orchestrator | Monday 10 February 2025 09:15:26 +0000 (0:00:00.154) 0:00:14.487 ******* 2025-02-10 09:15:26.508184 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:26.508257 | orchestrator | 2025-02-10 09:15:26.508276 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-10 09:15:26.508497 | orchestrator | Monday 10 February 2025 09:15:26 +0000 (0:00:00.134) 0:00:14.622 ******* 2025-02-10 09:15:26.650427 | orchestrator | skipping: [testbed-node-3] 2025-02-10 
09:15:26.650673 | orchestrator | 2025-02-10 09:15:26.650707 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-10 09:15:27.058300 | orchestrator | Monday 10 February 2025 09:15:26 +0000 (0:00:00.138) 0:00:14.760 ******* 2025-02-10 09:15:27.058526 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:27.058617 | orchestrator | 2025-02-10 09:15:27.058641 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-10 09:15:27.060998 | orchestrator | Monday 10 February 2025 09:15:27 +0000 (0:00:00.412) 0:00:15.172 ******* 2025-02-10 09:15:27.218115 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:15:27.218295 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:15:27.218321 | orchestrator |  "sdb": { 2025-02-10 09:15:27.218402 | orchestrator |  "osd_lvm_uuid": "70e6c2b1-f69e-5685-9251-bc72a13d87ec" 2025-02-10 09:15:27.218662 | orchestrator |  }, 2025-02-10 09:15:27.220634 | orchestrator |  "sdc": { 2025-02-10 09:15:27.220841 | orchestrator |  "osd_lvm_uuid": "f3b4a615-299b-50bf-af8e-26b6dc38e729" 2025-02-10 09:15:27.224506 | orchestrator |  } 2025-02-10 09:15:27.224977 | orchestrator |  } 2025-02-10 09:15:27.225557 | orchestrator | } 2025-02-10 09:15:27.226183 | orchestrator | 2025-02-10 09:15:27.226213 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-10 09:15:27.226466 | orchestrator | Monday 10 February 2025 09:15:27 +0000 (0:00:00.160) 0:00:15.333 ******* 2025-02-10 09:15:27.386083 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:27.529871 | orchestrator | 2025-02-10 09:15:27.530005 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-10 09:15:27.530078 | orchestrator | Monday 10 February 2025 09:15:27 +0000 (0:00:00.166) 0:00:15.499 ******* 2025-02-10 09:15:27.530113 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:27.530227 | orchestrator | 2025-02-10 09:15:27.530508 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-10 09:15:27.532706 | orchestrator | Monday 10 February 2025 09:15:27 +0000 (0:00:00.146) 0:00:15.646 ******* 2025-02-10 09:15:27.677014 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:27.677910 | orchestrator | 2025-02-10 09:15:27.679689 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-10 09:15:27.679999 | orchestrator | Monday 10 February 2025 09:15:27 +0000 (0:00:00.146) 0:00:15.793 ******* 2025-02-10 09:15:28.001640 | orchestrator | changed: [testbed-node-3] => { 2025-02-10 09:15:28.004388 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-10 09:15:28.004601 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:15:28.005025 | orchestrator |  "sdb": { 2025-02-10 09:15:28.006772 | orchestrator |  "osd_lvm_uuid": "70e6c2b1-f69e-5685-9251-bc72a13d87ec" 2025-02-10 09:15:28.007285 | orchestrator |  }, 2025-02-10 09:15:28.008967 | orchestrator |  "sdc": { 2025-02-10 09:15:28.009108 | orchestrator |  "osd_lvm_uuid": "f3b4a615-299b-50bf-af8e-26b6dc38e729" 2025-02-10 09:15:28.009132 | orchestrator |  } 2025-02-10 09:15:28.009150 | orchestrator |  }, 2025-02-10 09:15:28.009166 | orchestrator |  "lvm_volumes": [ 2025-02-10 09:15:28.009188 | orchestrator |  { 2025-02-10 09:15:28.013538 | orchestrator |  "data": "osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec", 2025-02-10 09:15:28.014844 | orchestrator |  
"data_vg": "ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec" 2025-02-10 09:15:28.014888 | orchestrator |  }, 2025-02-10 09:15:28.014914 | orchestrator |  { 2025-02-10 09:15:28.016038 | orchestrator |  "data": "osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729", 2025-02-10 09:15:28.016609 | orchestrator |  "data_vg": "ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729" 2025-02-10 09:15:28.017011 | orchestrator |  } 2025-02-10 09:15:28.017041 | orchestrator |  ] 2025-02-10 09:15:28.017640 | orchestrator |  } 2025-02-10 09:15:28.018279 | orchestrator | } 2025-02-10 09:15:28.018766 | orchestrator | 2025-02-10 09:15:28.019240 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-10 09:15:28.019955 | orchestrator | Monday 10 February 2025 09:15:27 +0000 (0:00:00.323) 0:00:16.116 ******* 2025-02-10 09:15:30.373511 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:15:30.627127 | orchestrator | 2025-02-10 09:15:30.627332 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-10 09:15:30.627402 | orchestrator | 2025-02-10 09:15:30.627428 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:15:30.627452 | orchestrator | Monday 10 February 2025 09:15:30 +0000 (0:00:02.372) 0:00:18.488 ******* 2025-02-10 09:15:30.627499 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-10 09:15:30.627628 | orchestrator | 2025-02-10 09:15:30.627667 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:15:30.846953 | orchestrator | Monday 10 February 2025 09:15:30 +0000 (0:00:00.254) 0:00:18.742 ******* 2025-02-10 09:15:30.847111 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:15:31.290183 | orchestrator | 2025-02-10 09:15:31.290295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:31.290308 | orchestrator | Monday 10 February 2025 09:15:30 +0000 (0:00:00.218) 0:00:18.961 ******* 2025-02-10 09:15:31.290330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-02-10 09:15:31.290409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-02-10 09:15:31.291706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-02-10 09:15:31.292097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-02-10 09:15:31.294583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-02-10 09:15:31.294726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-02-10 09:15:31.296135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-02-10 09:15:31.298411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-02-10 09:15:31.298882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-02-10 09:15:31.301686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-02-10 09:15:31.301884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-02-10 09:15:31.302204 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-02-10 09:15:31.302469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-02-10 09:15:31.302741 | orchestrator | 2025-02-10 09:15:31.302937 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:31.303292 | orchestrator | Monday 10 February 2025 09:15:31 +0000 (0:00:00.446) 0:00:19.407 ******* 2025-02-10 09:15:31.481935 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:31.482541 | orchestrator | 2025-02-10 09:15:31.482656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:31.483801 | orchestrator | Monday 10 February 2025 09:15:31 +0000 (0:00:00.189) 0:00:19.597 ******* 2025-02-10 09:15:31.668314 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:31.669837 | orchestrator | 2025-02-10 09:15:31.669875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:31.669900 | orchestrator | Monday 10 February 2025 09:15:31 +0000 (0:00:00.186) 0:00:19.783 ******* 2025-02-10 09:15:31.844320 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:31.844969 | orchestrator | 2025-02-10 09:15:31.847260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:31.847536 | orchestrator | Monday 10 February 2025 09:15:31 +0000 (0:00:00.175) 0:00:19.959 ******* 2025-02-10 09:15:32.317329 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:32.326646 | orchestrator | 2025-02-10 09:15:32.327781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:32.329784 | orchestrator | Monday 10 February 2025 09:15:32 +0000 (0:00:00.474) 0:00:20.434 ******* 2025-02-10 09:15:32.560094 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:32.560607 | orchestrator | 2025-02-10 09:15:32.560642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:32.560884 | orchestrator | Monday 10 February 2025 09:15:32 +0000 (0:00:00.238) 0:00:20.673 ******* 2025-02-10 09:15:32.929121 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:33.246592 | orchestrator | 2025-02-10 09:15:33.246703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:33.246721 | orchestrator | Monday 10 February 2025 09:15:32 +0000 (0:00:00.365) 0:00:21.038 ******* 2025-02-10 09:15:33.246752 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:33.248582 | orchestrator | 2025-02-10 09:15:33.248714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:33.249165 | orchestrator | Monday 10 February 2025 09:15:33 +0000 (0:00:00.317) 0:00:21.356 ******* 2025-02-10 09:15:33.573975 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:33.576785 | orchestrator | 2025-02-10 09:15:34.222447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:34.222620 | orchestrator | Monday 10 February 2025 09:15:33 +0000 (0:00:00.330) 0:00:21.687 ******* 2025-02-10 09:15:34.222656 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d) 2025-02-10 09:15:34.224809 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d) 2025-02-10 09:15:34.225798 | orchestrator | 2025-02-10 09:15:34.225823 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:34.226508 | orchestrator | Monday 10 February 2025 09:15:34 +0000 (0:00:00.649) 0:00:22.336 ******* 2025-02-10 09:15:34.713503 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_103f3392-831d-4ee6-b0f0-d6be015816d3) 2025-02-10 09:15:34.713791 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_103f3392-831d-4ee6-b0f0-d6be015816d3) 2025-02-10 09:15:34.715844 | orchestrator | 2025-02-10 09:15:34.716334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:34.716585 | orchestrator | Monday 10 February 2025 09:15:34 +0000 (0:00:00.490) 0:00:22.827 ******* 2025-02-10 09:15:35.572433 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_23794fae-2c08-458a-becf-a15050b8218b) 2025-02-10 09:15:35.574409 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_23794fae-2c08-458a-becf-a15050b8218b) 2025-02-10 09:15:35.574771 | orchestrator | 2025-02-10 09:15:35.574821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:35.575003 | orchestrator | Monday 10 February 2025 09:15:35 +0000 (0:00:00.854) 0:00:23.681 ******* 2025-02-10 09:15:36.357123 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_492baa9f-f661-44dd-a3d2-70d79942748c) 2025-02-10 09:15:36.358412 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_492baa9f-f661-44dd-a3d2-70d79942748c) 2025-02-10 09:15:36.358544 | orchestrator | 2025-02-10 09:15:36.361387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:36.923887 | orchestrator | Monday 10 February 2025 09:15:36 +0000 (0:00:00.789) 0:00:24.470 ******* 2025-02-10 09:15:36.924032 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:15:37.573228 | orchestrator | 2025-02-10 09:15:37.573421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:37.573451 | orchestrator | Monday 10 February 2025 09:15:36 +0000 (0:00:00.567) 0:00:25.037 ******* 2025-02-10 09:15:37.573481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-02-10 09:15:37.573653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-02-10 09:15:37.573679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-02-10 09:15:37.573994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-02-10 09:15:37.574426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-02-10 09:15:37.577740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-02-10 09:15:37.578163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-02-10 09:15:37.578194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-02-10 09:15:37.578585 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-02-10 09:15:37.581659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-02-10 09:15:37.774593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-02-10 09:15:37.774721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-02-10 09:15:37.774740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-02-10 09:15:37.774755 | orchestrator | 2025-02-10 09:15:37.774770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:37.774785 | orchestrator | Monday 10 February 2025 09:15:37 +0000 (0:00:00.650) 0:00:25.688 ******* 2025-02-10 09:15:37.774818 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:37.780455 | orchestrator | 2025-02-10 09:15:37.782417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:37.782482 | orchestrator | Monday 10 February 2025 09:15:37 +0000 (0:00:00.198) 0:00:25.886 ******* 2025-02-10 09:15:38.000026 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:38.000690 | orchestrator | 2025-02-10 09:15:38.001752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:38.002923 | orchestrator | Monday 10 February 2025 09:15:37 +0000 (0:00:00.228) 0:00:26.115 ******* 2025-02-10 09:15:38.234102 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:38.234662 | orchestrator | 2025-02-10 09:15:38.234725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:38.235531 | orchestrator | Monday 10 February 2025 09:15:38 +0000 (0:00:00.233) 0:00:26.349 ******* 2025-02-10 09:15:38.459296 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:38.459516 | orchestrator | 2025-02-10 09:15:38.461526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:38.670800 | orchestrator | Monday 10 February 2025 09:15:38 +0000 (0:00:00.223) 0:00:26.573 ******* 2025-02-10 09:15:38.671096 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:38.671230 | orchestrator | 2025-02-10 09:15:38.671262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:38.869470 | orchestrator | Monday 10 February 2025 09:15:38 +0000 (0:00:00.212) 0:00:26.786 ******* 2025-02-10 09:15:38.869638 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:38.870628 | orchestrator | 2025-02-10 09:15:38.870700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:39.122718 | orchestrator | Monday 10 February 2025 09:15:38 +0000 (0:00:00.196) 0:00:26.982 ******* 2025-02-10 09:15:39.122838 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:39.124491 | orchestrator | 2025-02-10 09:15:39.385248 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:39.385410 | orchestrator | Monday 10 February 2025 09:15:39 +0000 (0:00:00.254) 0:00:27.237 ******* 2025-02-10 09:15:39.385436 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:39.386063 | orchestrator | 2025-02-10 09:15:39.387764 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-02-10 09:15:39.388481 | orchestrator | Monday 10 February 2025 09:15:39 +0000 (0:00:00.263) 0:00:27.500 ******* 2025-02-10 09:15:40.568107 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-02-10 09:15:40.568731 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-02-10 09:15:40.571157 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-02-10 09:15:40.571331 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-02-10 09:15:40.571575 | orchestrator | 2025-02-10 09:15:40.573031 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:40.573484 | orchestrator | Monday 10 February 2025 09:15:40 +0000 (0:00:01.177) 0:00:28.677 ******* 2025-02-10 09:15:40.832188 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:40.837831 | orchestrator | 2025-02-10 09:15:40.839468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:41.119536 | orchestrator | Monday 10 February 2025 09:15:40 +0000 (0:00:00.270) 0:00:28.948 ******* 2025-02-10 09:15:41.119731 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:41.119842 | orchestrator | 2025-02-10 09:15:41.120801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:41.122463 | orchestrator | Monday 10 February 2025 09:15:41 +0000 (0:00:00.283) 0:00:29.231 ******* 2025-02-10 09:15:41.343353 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:41.343616 | orchestrator | 2025-02-10 09:15:41.344116 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:41.346608 | orchestrator | Monday 10 February 2025 09:15:41 +0000 (0:00:00.226) 0:00:29.458 ******* 2025-02-10 09:15:41.609072 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:41.611558 | orchestrator | 2025-02-10 09:15:41.614465 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-10 09:15:41.617720 | orchestrator | Monday 10 February 2025 09:15:41 +0000 (0:00:00.262) 0:00:29.721 ******* 2025-02-10 09:15:41.823054 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-02-10 09:15:41.824353 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-02-10 09:15:41.824569 | orchestrator | 2025-02-10 09:15:41.826667 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-10 09:15:41.826853 | orchestrator | Monday 10 February 2025 09:15:41 +0000 (0:00:00.212) 0:00:29.934 ******* 2025-02-10 09:15:42.021001 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:42.024709 | orchestrator | 2025-02-10 09:15:42.024758 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-10 09:15:42.025693 | orchestrator | Monday 10 February 2025 09:15:42 +0000 (0:00:00.193) 0:00:30.127 ******* 2025-02-10 09:15:42.217975 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:42.218297 | orchestrator | 2025-02-10 09:15:42.218668 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-10 09:15:42.218713 | orchestrator | Monday 10 February 2025 09:15:42 +0000 (0:00:00.200) 0:00:30.328 ******* 2025-02-10 09:15:42.382590 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:42.383791 | orchestrator | 2025-02-10 
09:15:42.383848 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-10 09:15:42.387791 | orchestrator | Monday 10 February 2025 09:15:42 +0000 (0:00:00.168) 0:00:30.497 ******* 2025-02-10 09:15:42.547576 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:15:42.548523 | orchestrator | 2025-02-10 09:15:42.548584 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-10 09:15:42.548820 | orchestrator | Monday 10 February 2025 09:15:42 +0000 (0:00:00.164) 0:00:30.661 ******* 2025-02-10 09:15:42.794695 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5101bad7-da03-58be-8044-cbe4500fcec9'}}) 2025-02-10 09:15:42.795044 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd59ecc87-3940-56cd-881a-fbc914ec02de'}}) 2025-02-10 09:15:42.795164 | orchestrator | 2025-02-10 09:15:42.797804 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-10 09:15:42.798588 | orchestrator | Monday 10 February 2025 09:15:42 +0000 (0:00:00.244) 0:00:30.906 ******* 2025-02-10 09:15:43.264788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5101bad7-da03-58be-8044-cbe4500fcec9'}})  2025-02-10 09:15:43.272089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd59ecc87-3940-56cd-881a-fbc914ec02de'}})  2025-02-10 09:15:43.272848 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:43.274644 | orchestrator | 2025-02-10 09:15:43.274695 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-10 09:15:43.274720 | orchestrator | Monday 10 February 2025 09:15:43 +0000 (0:00:00.471) 0:00:31.377 ******* 2025-02-10 09:15:43.494250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5101bad7-da03-58be-8044-cbe4500fcec9'}})  2025-02-10 09:15:43.496019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd59ecc87-3940-56cd-881a-fbc914ec02de'}})  2025-02-10 09:15:43.496119 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:43.496576 | orchestrator | 2025-02-10 09:15:43.497620 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-10 09:15:43.500173 | orchestrator | Monday 10 February 2025 09:15:43 +0000 (0:00:00.228) 0:00:31.606 ******* 2025-02-10 09:15:43.657270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5101bad7-da03-58be-8044-cbe4500fcec9'}})  2025-02-10 09:15:43.659470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd59ecc87-3940-56cd-881a-fbc914ec02de'}})  2025-02-10 09:15:43.659745 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:43.660878 | orchestrator | 2025-02-10 09:15:43.666596 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-10 09:15:43.667332 | orchestrator | Monday 10 February 2025 09:15:43 +0000 (0:00:00.167) 0:00:31.774 ******* 2025-02-10 09:15:43.822554 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:15:43.825080 | orchestrator | 2025-02-10 09:15:43.826136 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-10 09:15:43.827170 | orchestrator | Monday 10 February 2025 09:15:43 +0000 
(0:00:00.164) 0:00:31.938 ******* 2025-02-10 09:15:43.982105 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:15:43.982314 | orchestrator | 2025-02-10 09:15:43.983041 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-10 09:15:43.984075 | orchestrator | Monday 10 February 2025 09:15:43 +0000 (0:00:00.158) 0:00:32.097 ******* 2025-02-10 09:15:44.131778 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:44.132038 | orchestrator | 2025-02-10 09:15:44.132608 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-10 09:15:44.133304 | orchestrator | Monday 10 February 2025 09:15:44 +0000 (0:00:00.148) 0:00:32.245 ******* 2025-02-10 09:15:44.261832 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:44.262117 | orchestrator | 2025-02-10 09:15:44.263575 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-10 09:15:44.266610 | orchestrator | Monday 10 February 2025 09:15:44 +0000 (0:00:00.132) 0:00:32.378 ******* 2025-02-10 09:15:44.402162 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:44.403894 | orchestrator | 2025-02-10 09:15:44.406093 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-10 09:15:44.409153 | orchestrator | Monday 10 February 2025 09:15:44 +0000 (0:00:00.139) 0:00:32.517 ******* 2025-02-10 09:15:44.547988 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:15:44.551253 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:15:44.553773 | orchestrator |  "sdb": { 2025-02-10 09:15:44.555110 | orchestrator |  "osd_lvm_uuid": "5101bad7-da03-58be-8044-cbe4500fcec9" 2025-02-10 09:15:44.555180 | orchestrator |  }, 2025-02-10 09:15:44.555866 | orchestrator |  "sdc": { 2025-02-10 09:15:44.556814 | orchestrator |  "osd_lvm_uuid": "d59ecc87-3940-56cd-881a-fbc914ec02de" 2025-02-10 09:15:44.557315 | orchestrator |  } 2025-02-10 09:15:44.557989 | orchestrator |  } 2025-02-10 09:15:44.558740 | orchestrator | } 2025-02-10 09:15:44.559290 | orchestrator | 2025-02-10 09:15:44.560552 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-10 09:15:44.561776 | orchestrator | Monday 10 February 2025 09:15:44 +0000 (0:00:00.145) 0:00:32.663 ******* 2025-02-10 09:15:44.704891 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:44.706180 | orchestrator | 2025-02-10 09:15:44.707725 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-10 09:15:44.708644 | orchestrator | Monday 10 February 2025 09:15:44 +0000 (0:00:00.154) 0:00:32.817 ******* 2025-02-10 09:15:44.884957 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:44.885279 | orchestrator | 2025-02-10 09:15:44.886441 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-10 09:15:44.887826 | orchestrator | Monday 10 February 2025 09:15:44 +0000 (0:00:00.177) 0:00:32.995 ******* 2025-02-10 09:15:45.045498 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:45.047307 | orchestrator | 2025-02-10 09:15:45.048708 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-10 09:15:45.049993 | orchestrator | Monday 10 February 2025 09:15:45 +0000 (0:00:00.165) 0:00:33.161 ******* 2025-02-10 09:15:45.543109 | orchestrator | changed: [testbed-node-4] => { 2025-02-10 09:15:45.544258 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-10 09:15:45.544338 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:15:45.547994 | orchestrator |  "sdb": { 2025-02-10 09:15:45.549112 | orchestrator |  "osd_lvm_uuid": "5101bad7-da03-58be-8044-cbe4500fcec9" 2025-02-10 09:15:45.550302 | orchestrator |  }, 2025-02-10 09:15:45.550932 | orchestrator |  "sdc": { 2025-02-10 09:15:45.551750 | orchestrator |  "osd_lvm_uuid": "d59ecc87-3940-56cd-881a-fbc914ec02de" 2025-02-10 09:15:45.552453 | orchestrator |  } 2025-02-10 09:15:45.553755 | orchestrator |  }, 2025-02-10 09:15:45.554218 | orchestrator |  "lvm_volumes": [ 2025-02-10 09:15:45.555042 | orchestrator |  { 2025-02-10 09:15:45.557645 | orchestrator |  "data": "osd-block-5101bad7-da03-58be-8044-cbe4500fcec9", 2025-02-10 09:15:45.558310 | orchestrator |  "data_vg": "ceph-5101bad7-da03-58be-8044-cbe4500fcec9" 2025-02-10 09:15:45.558898 | orchestrator |  }, 2025-02-10 09:15:45.559377 | orchestrator |  { 2025-02-10 09:15:45.559810 | orchestrator |  "data": "osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de", 2025-02-10 09:15:45.560243 | orchestrator |  "data_vg": "ceph-d59ecc87-3940-56cd-881a-fbc914ec02de" 2025-02-10 09:15:45.562639 | orchestrator |  } 2025-02-10 09:15:45.563169 | orchestrator |  ] 2025-02-10 09:15:45.563407 | orchestrator |  } 2025-02-10 09:15:45.564342 | orchestrator | } 2025-02-10 09:15:45.564704 | orchestrator | 2025-02-10 09:15:45.566981 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-10 09:15:45.567165 | orchestrator | Monday 10 February 2025 09:15:45 +0000 (0:00:00.497) 0:00:33.658 ******* 2025-02-10 09:15:46.993427 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-10 09:15:46.993575 | orchestrator | 2025-02-10 09:15:46.994596 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-10 09:15:46.995917 | orchestrator | 2025-02-10 09:15:46.997061 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:15:46.997742 | orchestrator | Monday 10 February 2025 09:15:46 +0000 (0:00:01.447) 0:00:35.106 ******* 2025-02-10 09:15:47.261456 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-10 09:15:47.263259 | orchestrator | 2025-02-10 09:15:47.263769 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:15:47.265933 | orchestrator | Monday 10 February 2025 09:15:47 +0000 (0:00:00.270) 0:00:35.376 ******* 2025-02-10 09:15:47.997284 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:15:47.998405 | orchestrator | 2025-02-10 09:15:48.008728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:48.501210 | orchestrator | Monday 10 February 2025 09:15:47 +0000 (0:00:00.730) 0:00:36.107 ******* 2025-02-10 09:15:48.501476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-02-10 09:15:48.501589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-02-10 09:15:48.501938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-02-10 09:15:48.502747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-02-10 09:15:48.503433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-02-10 09:15:48.504315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-02-10 09:15:48.504669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-02-10 09:15:48.505534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-02-10 09:15:48.505921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-02-10 09:15:48.506276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-02-10 09:15:48.506594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-02-10 09:15:48.506844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-02-10 09:15:48.507350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-02-10 09:15:48.507471 | orchestrator | 2025-02-10 09:15:48.507895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:48.508233 | orchestrator | Monday 10 February 2025 09:15:48 +0000 (0:00:00.509) 0:00:36.617 ******* 2025-02-10 09:15:48.726498 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:48.727728 | orchestrator | 2025-02-10 09:15:48.729039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:48.729745 | orchestrator | Monday 10 February 2025 09:15:48 +0000 (0:00:00.222) 0:00:36.840 ******* 2025-02-10 09:15:48.947015 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:48.947196 | orchestrator | 2025-02-10 09:15:48.947223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:48.947754 | orchestrator | Monday 10 February 2025 09:15:48 +0000 (0:00:00.222) 0:00:37.062 ******* 2025-02-10 09:15:49.166871 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:49.167281 | orchestrator | 2025-02-10 09:15:49.167713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:49.168250 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.221) 0:00:37.283 ******* 2025-02-10 09:15:49.383949 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:49.385059 | orchestrator | 2025-02-10 09:15:49.388974 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:49.392187 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.214) 0:00:37.497 ******* 2025-02-10 09:15:49.595888 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:49.598159 | orchestrator | 2025-02-10 09:15:49.598905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:49.601333 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.211) 0:00:37.709 ******* 2025-02-10 09:15:49.800270 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:49.800555 | orchestrator | 2025-02-10 09:15:49.800999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:49.801766 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.205) 0:00:37.914 ******* 2025-02-10 09:15:50.008780 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:50.009307 
| orchestrator | 2025-02-10 09:15:50.010948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:50.012452 | orchestrator | Monday 10 February 2025 09:15:50 +0000 (0:00:00.208) 0:00:38.123 ******* 2025-02-10 09:15:50.208217 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:50.208448 | orchestrator | 2025-02-10 09:15:50.208874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:50.211172 | orchestrator | Monday 10 February 2025 09:15:50 +0000 (0:00:00.199) 0:00:38.323 ******* 2025-02-10 09:15:50.900140 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde) 2025-02-10 09:15:50.901645 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde) 2025-02-10 09:15:50.902567 | orchestrator | 2025-02-10 09:15:50.904579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:50.905882 | orchestrator | Monday 10 February 2025 09:15:50 +0000 (0:00:00.690) 0:00:39.013 ******* 2025-02-10 09:15:51.350463 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a31d8f91-c02a-4f65-9bd6-abd5e53b34f2) 2025-02-10 09:15:51.350679 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a31d8f91-c02a-4f65-9bd6-abd5e53b34f2) 2025-02-10 09:15:51.351253 | orchestrator | 2025-02-10 09:15:51.351815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:51.354175 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.452) 0:00:39.465 ******* 2025-02-10 09:15:51.790950 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_be832b54-23bf-4f17-8551-69f0e04b6625) 2025-02-10 09:15:51.792362 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_be832b54-23bf-4f17-8551-69f0e04b6625) 2025-02-10 09:15:52.227570 | orchestrator | 2025-02-10 09:15:52.227704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:52.227724 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.436) 0:00:39.902 ******* 2025-02-10 09:15:52.227755 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_809e68db-7594-4e4e-90c0-4a7ae6eb5d4d) 2025-02-10 09:15:52.228600 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_809e68db-7594-4e4e-90c0-4a7ae6eb5d4d) 2025-02-10 09:15:52.228940 | orchestrator | 2025-02-10 09:15:52.229421 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:52.229699 | orchestrator | Monday 10 February 2025 09:15:52 +0000 (0:00:00.438) 0:00:40.341 ******* 2025-02-10 09:15:52.588856 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:15:52.590970 | orchestrator | 2025-02-10 09:15:52.592142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:52.592927 | orchestrator | Monday 10 February 2025 09:15:52 +0000 (0:00:00.362) 0:00:40.704 ******* 2025-02-10 09:15:52.985094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-02-10 09:15:52.986852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-02-10 09:15:52.986938 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-02-10 09:15:52.988074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-02-10 09:15:52.989102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-02-10 09:15:52.990340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-02-10 09:15:52.990823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-02-10 09:15:52.991234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-02-10 09:15:52.991965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-02-10 09:15:52.992745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-02-10 09:15:52.993276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-02-10 09:15:52.993409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-02-10 09:15:52.993509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-02-10 09:15:52.993931 | orchestrator | 2025-02-10 09:15:52.994586 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:52.995043 | orchestrator | Monday 10 February 2025 09:15:52 +0000 (0:00:00.392) 0:00:41.096 ******* 2025-02-10 09:15:53.190182 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:53.191833 | orchestrator | 2025-02-10 09:15:53.193862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:53.418283 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.208) 0:00:41.305 ******* 2025-02-10 09:15:53.418464 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:53.419308 | orchestrator | 2025-02-10 09:15:53.420098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:53.423823 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.228) 0:00:41.533 ******* 2025-02-10 09:15:53.623948 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:53.625021 | orchestrator | 2025-02-10 09:15:53.627641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:53.628683 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.204) 0:00:41.738 ******* 2025-02-10 09:15:54.274896 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:54.276138 | orchestrator | 2025-02-10 09:15:54.277295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:54.277336 | orchestrator | Monday 10 February 2025 09:15:54 +0000 (0:00:00.650) 0:00:42.389 ******* 2025-02-10 09:15:54.515710 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:54.516502 | orchestrator | 2025-02-10 09:15:54.519537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:54.723992 | orchestrator | Monday 10 February 2025 09:15:54 +0000 (0:00:00.241) 0:00:42.630 ******* 2025-02-10 09:15:54.724149 | orchestrator | skipping: [testbed-node-5] 2025-02-10 
09:15:54.724768 | orchestrator | 2025-02-10 09:15:54.725490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:54.727592 | orchestrator | Monday 10 February 2025 09:15:54 +0000 (0:00:00.208) 0:00:42.839 ******* 2025-02-10 09:15:54.969129 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:54.969328 | orchestrator | 2025-02-10 09:15:54.972790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:54.973500 | orchestrator | Monday 10 February 2025 09:15:54 +0000 (0:00:00.244) 0:00:43.083 ******* 2025-02-10 09:15:55.176007 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:55.179468 | orchestrator | 2025-02-10 09:15:55.836242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:55.836361 | orchestrator | Monday 10 February 2025 09:15:55 +0000 (0:00:00.205) 0:00:43.289 ******* 2025-02-10 09:15:55.836427 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-02-10 09:15:55.837571 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-02-10 09:15:55.840688 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-02-10 09:15:55.840841 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-02-10 09:15:55.840866 | orchestrator | 2025-02-10 09:15:55.841800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:55.841834 | orchestrator | Monday 10 February 2025 09:15:55 +0000 (0:00:00.661) 0:00:43.951 ******* 2025-02-10 09:15:56.060857 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:56.061713 | orchestrator | 2025-02-10 09:15:56.063098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:56.063898 | orchestrator | Monday 10 February 2025 09:15:56 +0000 (0:00:00.224) 0:00:44.175 ******* 2025-02-10 09:15:56.251776 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:56.251992 | orchestrator | 2025-02-10 09:15:56.252789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:56.253488 | orchestrator | Monday 10 February 2025 09:15:56 +0000 (0:00:00.191) 0:00:44.367 ******* 2025-02-10 09:15:56.453047 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:56.454456 | orchestrator | 2025-02-10 09:15:56.455581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:56.456507 | orchestrator | Monday 10 February 2025 09:15:56 +0000 (0:00:00.200) 0:00:44.568 ******* 2025-02-10 09:15:56.669837 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:56.670120 | orchestrator | 2025-02-10 09:15:56.670161 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-10 09:15:56.671013 | orchestrator | Monday 10 February 2025 09:15:56 +0000 (0:00:00.216) 0:00:44.785 ******* 2025-02-10 09:15:57.085490 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-02-10 09:15:57.086950 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-02-10 09:15:57.087179 | orchestrator | 2025-02-10 09:15:57.088129 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-10 09:15:57.089017 | orchestrator | Monday 10 February 2025 09:15:57 +0000 (0:00:00.411) 0:00:45.197 ******* 2025-02-10 09:15:57.218720 | 
orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:57.219821 | orchestrator | 2025-02-10 09:15:57.220653 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-10 09:15:57.221606 | orchestrator | Monday 10 February 2025 09:15:57 +0000 (0:00:00.137) 0:00:45.334 ******* 2025-02-10 09:15:57.358369 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:57.358625 | orchestrator | 2025-02-10 09:15:57.359400 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-10 09:15:57.359736 | orchestrator | Monday 10 February 2025 09:15:57 +0000 (0:00:00.138) 0:00:45.473 ******* 2025-02-10 09:15:57.509089 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:57.513670 | orchestrator | 2025-02-10 09:15:57.514261 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-10 09:15:57.514297 | orchestrator | Monday 10 February 2025 09:15:57 +0000 (0:00:00.151) 0:00:45.625 ******* 2025-02-10 09:15:57.661040 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:15:57.662233 | orchestrator | 2025-02-10 09:15:57.662823 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-10 09:15:57.664730 | orchestrator | Monday 10 February 2025 09:15:57 +0000 (0:00:00.151) 0:00:45.776 ******* 2025-02-10 09:15:57.852886 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89c58721-f175-5d0e-8750-3436c1d71ced'}}) 2025-02-10 09:15:57.853922 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '989340a3-ac62-57b3-a342-92d58018bc1c'}}) 2025-02-10 09:15:57.854749 | orchestrator | 2025-02-10 09:15:57.855839 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-10 09:15:57.857048 | orchestrator | Monday 10 February 2025 09:15:57 +0000 (0:00:00.190) 0:00:45.966 ******* 2025-02-10 09:15:58.025275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89c58721-f175-5d0e-8750-3436c1d71ced'}})  2025-02-10 09:15:58.025494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '989340a3-ac62-57b3-a342-92d58018bc1c'}})  2025-02-10 09:15:58.026474 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:58.027329 | orchestrator | 2025-02-10 09:15:58.027846 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-10 09:15:58.028657 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.172) 0:00:46.139 ******* 2025-02-10 09:15:58.204173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89c58721-f175-5d0e-8750-3436c1d71ced'}})  2025-02-10 09:15:58.205361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '989340a3-ac62-57b3-a342-92d58018bc1c'}})  2025-02-10 09:15:58.206466 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:58.207615 | orchestrator | 2025-02-10 09:15:58.208199 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-10 09:15:58.209064 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.179) 0:00:46.319 ******* 2025-02-10 09:15:58.378177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89c58721-f175-5d0e-8750-3436c1d71ced'}})  2025-02-10 09:15:58.378673 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '989340a3-ac62-57b3-a342-92d58018bc1c'}})  2025-02-10 09:15:58.379473 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:58.382558 | orchestrator | 2025-02-10 09:15:58.515014 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-10 09:15:58.515143 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.174) 0:00:46.493 ******* 2025-02-10 09:15:58.515176 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:15:58.515716 | orchestrator | 2025-02-10 09:15:58.516559 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-10 09:15:58.516995 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.136) 0:00:46.630 ******* 2025-02-10 09:15:58.670897 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:15:58.671323 | orchestrator | 2025-02-10 09:15:58.673138 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-10 09:15:58.673770 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.155) 0:00:46.785 ******* 2025-02-10 09:15:58.810481 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:58.811814 | orchestrator | 2025-02-10 09:15:58.812448 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-10 09:15:58.815075 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.140) 0:00:46.926 ******* 2025-02-10 09:15:59.172752 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:59.173079 | orchestrator | 2025-02-10 09:15:59.173109 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-10 09:15:59.173129 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.361) 0:00:47.287 ******* 2025-02-10 09:15:59.314261 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:59.315112 | orchestrator | 2025-02-10 09:15:59.315155 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-10 09:15:59.315976 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.142) 0:00:47.429 ******* 2025-02-10 09:15:59.459103 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:15:59.459370 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:15:59.460832 | orchestrator |  "sdb": { 2025-02-10 09:15:59.461285 | orchestrator |  "osd_lvm_uuid": "89c58721-f175-5d0e-8750-3436c1d71ced" 2025-02-10 09:15:59.462112 | orchestrator |  }, 2025-02-10 09:15:59.463182 | orchestrator |  "sdc": { 2025-02-10 09:15:59.464092 | orchestrator |  "osd_lvm_uuid": "989340a3-ac62-57b3-a342-92d58018bc1c" 2025-02-10 09:15:59.465060 | orchestrator |  } 2025-02-10 09:15:59.465437 | orchestrator |  } 2025-02-10 09:15:59.466718 | orchestrator | } 2025-02-10 09:15:59.467121 | orchestrator | 2025-02-10 09:15:59.467153 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-10 09:15:59.467836 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.145) 0:00:47.574 ******* 2025-02-10 09:15:59.597925 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:59.598214 | orchestrator | 2025-02-10 09:15:59.599222 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-10 09:15:59.599961 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.138) 0:00:47.713 ******* 2025-02-10 
09:15:59.746259 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:59.747199 | orchestrator | 2025-02-10 09:15:59.748353 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-10 09:15:59.749162 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.147) 0:00:47.861 ******* 2025-02-10 09:15:59.886620 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:15:59.886832 | orchestrator | 2025-02-10 09:15:59.887851 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-10 09:15:59.887891 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.140) 0:00:48.002 ******* 2025-02-10 09:16:00.165684 | orchestrator | changed: [testbed-node-5] => { 2025-02-10 09:16:00.167512 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-10 09:16:00.168090 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:16:00.168131 | orchestrator |  "sdb": { 2025-02-10 09:16:00.168941 | orchestrator |  "osd_lvm_uuid": "89c58721-f175-5d0e-8750-3436c1d71ced" 2025-02-10 09:16:00.169625 | orchestrator |  }, 2025-02-10 09:16:00.169996 | orchestrator |  "sdc": { 2025-02-10 09:16:00.170899 | orchestrator |  "osd_lvm_uuid": "989340a3-ac62-57b3-a342-92d58018bc1c" 2025-02-10 09:16:00.171822 | orchestrator |  } 2025-02-10 09:16:00.172330 | orchestrator |  }, 2025-02-10 09:16:00.172360 | orchestrator |  "lvm_volumes": [ 2025-02-10 09:16:00.173042 | orchestrator |  { 2025-02-10 09:16:00.173719 | orchestrator |  "data": "osd-block-89c58721-f175-5d0e-8750-3436c1d71ced", 2025-02-10 09:16:00.174104 | orchestrator |  "data_vg": "ceph-89c58721-f175-5d0e-8750-3436c1d71ced" 2025-02-10 09:16:00.174999 | orchestrator |  }, 2025-02-10 09:16:00.175630 | orchestrator |  { 2025-02-10 09:16:00.176211 | orchestrator |  "data": "osd-block-989340a3-ac62-57b3-a342-92d58018bc1c", 2025-02-10 09:16:00.176238 | orchestrator |  "data_vg": "ceph-989340a3-ac62-57b3-a342-92d58018bc1c" 2025-02-10 09:16:00.176484 | orchestrator |  } 2025-02-10 09:16:00.176953 | orchestrator |  ] 2025-02-10 09:16:00.177819 | orchestrator |  } 2025-02-10 09:16:00.178203 | orchestrator | } 2025-02-10 09:16:00.178484 | orchestrator | 2025-02-10 09:16:00.178625 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-10 09:16:00.179002 | orchestrator | Monday 10 February 2025 09:16:00 +0000 (0:00:00.276) 0:00:48.279 ******* 2025-02-10 09:16:01.520503 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-10 09:16:01.520786 | orchestrator | 2025-02-10 09:16:01.522119 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:16:01.522207 | orchestrator | 2025-02-10 09:16:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:16:01.522472 | orchestrator | 2025-02-10 09:16:01 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:16:01.523169 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-10 09:16:01.523904 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-10 09:16:01.524647 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-10 09:16:01.525082 | orchestrator | 2025-02-10 09:16:01.526138 | orchestrator | 2025-02-10 09:16:01.526891 | orchestrator | 2025-02-10 09:16:01.527412 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:16:01.528343 | orchestrator | Monday 10 February 2025 09:16:01 +0000 (0:00:01.354) 0:00:49.633 ******* 2025-02-10 09:16:01.529251 | orchestrator | =============================================================================== 2025-02-10 09:16:01.530795 | orchestrator | Write configuration file ------------------------------------------------ 5.17s 2025-02-10 09:16:01.531796 | orchestrator | Add known partitions to the list of available block devices ------------- 1.57s 2025-02-10 09:16:01.532415 | orchestrator | Add known links to the list of available block devices ------------------ 1.55s 2025-02-10 09:16:01.533260 | orchestrator | Get initial list of available block devices ----------------------------- 1.25s 2025-02-10 09:16:01.534309 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s 2025-02-10 09:16:01.534462 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s 2025-02-10 09:16:01.534901 | orchestrator | Print configuration data ------------------------------------------------ 1.10s 2025-02-10 09:16:01.535326 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.95s 2025-02-10 09:16:01.535713 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s 2025-02-10 09:16:01.536137 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2025-02-10 09:16:01.536719 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2025-02-10 09:16:01.537030 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s 2025-02-10 09:16:01.537454 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.83s 2025-02-10 09:16:01.537795 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2025-02-10 09:16:01.538131 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2025-02-10 09:16:01.538766 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.72s 2025-02-10 09:16:01.538865 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.72s 2025-02-10 09:16:01.539696 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.69s 2025-02-10 09:16:01.539923 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-02-10 09:16:01.540299 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-02-10 09:16:03.598163 | orchestrator | 2025-02-10 09:16:03 | INFO  | Task 74e087de-18e2-4460-ac80-f897a61934b2 is running in background. Output coming soon. 
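For reference: the structure printed by the "Print configuration data" task above is what the "Write configuration file" handler persists for each node on the manager. Read back as plain YAML, the generated file for testbed-node-5 would look roughly like the sketch below; the values are copied from the log output, while the target path and file name are not shown in the log and are therefore assumptions.

    # Hedged sketch of the per-host configuration written for testbed-node-5
    # (values taken from the "Print configuration data" output; file location assumed)
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 89c58721-f175-5d0e-8750-3436c1d71ced
      sdc:
        osd_lvm_uuid: 989340a3-ac62-57b3-a342-92d58018bc1c
    lvm_volumes:
      - data: osd-block-89c58721-f175-5d0e-8750-3436c1d71ced
        data_vg: ceph-89c58721-f175-5d0e-8750-3436c1d71ced
      - data: osd-block-989340a3-ac62-57b3-a342-92d58018bc1c
        data_vg: ceph-989340a3-ac62-57b3-a342-92d58018bc1c

Each OSD disk thus gets a dedicated "ceph-<uuid>" volume group and an "osd-block-<uuid>" logical volume name, which the ceph-create-lvm-devices task below acts on.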
2025-02-10 09:16:51.051134 | orchestrator | 2025-02-10 09:16:42 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-02-10 09:16:52.757929 | orchestrator | 2025-02-10 09:16:42 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-02-10 09:16:52.758115 | orchestrator | 2025-02-10 09:16:42 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-02-10 09:16:52.758141 | orchestrator | 2025-02-10 09:16:42 | INFO  | Handling group overwrites in 99-overwrite 2025-02-10 09:16:52.758173 | orchestrator | 2025-02-10 09:16:42 | INFO  | Removing group ceph-mds from 50-ceph 2025-02-10 09:16:52.758207 | orchestrator | 2025-02-10 09:16:42 | INFO  | Removing group ceph-rgw from 50-ceph 2025-02-10 09:16:52.758222 | orchestrator | 2025-02-10 09:16:42 | INFO  | Removing group netbird:children from 50-infrastructure 2025-02-10 09:16:52.758238 | orchestrator | 2025-02-10 09:16:42 | INFO  | Removing group storage:children from 50-kolla 2025-02-10 09:16:52.758253 | orchestrator | 2025-02-10 09:16:42 | INFO  | Removing group frr:children from 60-generic 2025-02-10 09:16:52.758267 | orchestrator | 2025-02-10 09:16:42 | INFO  | Handling group overwrites in 20-roles 2025-02-10 09:16:52.758280 | orchestrator | 2025-02-10 09:16:42 | INFO  | Removing group k3s_node from 50-infrastructure 2025-02-10 09:16:52.758292 | orchestrator | 2025-02-10 09:16:43 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-02-10 09:16:52.758306 | orchestrator | 2025-02-10 09:16:50 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-02-10 09:16:52.758339 | orchestrator | 2025-02-10 09:16:52 | INFO  | Task 356451cd-678e-469a-a592-bb6da07e8e03 (ceph-create-lvm-devices) was prepared for execution. 2025-02-10 09:16:55.822962 | orchestrator | 2025-02-10 09:16:52 | INFO  | It takes a moment until task 356451cd-678e-469a-a592-bb6da07e8e03 (ceph-create-lvm-devices) has been started and output is visible here. 
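The ceph-create-lvm-devices play that follows turns every lvm_volumes entry into one LVM volume group and one logical volume (the "Create block VGs" and "Create block LVs" tasks). A minimal Ansible sketch of the equivalent operations, assuming the community.general lvg/lvol modules, is shown here; the physical-volume mapping and the 100%VG sizing are illustrative assumptions, not the play's actual implementation.

    # Hedged sketch: approximate equivalent of "Create block VGs" / "Create block LVs"
    - name: Create block VGs (sketch)
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: /dev/sdb  # illustrative only; the real PV is resolved from ceph_osd_devices
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs (sketch)
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%VG  # assumed sizing; the actual play may size LVs differently
      loop: "{{ lvm_volumes }}"

In the log this shows up as two changed items per node, one VG+LV pair per OSD disk (sdb and sdc).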
2025-02-10 09:16:55.823130 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:16:56.333779 | orchestrator | 2025-02-10 09:16:56.334844 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-10 09:16:56.334935 | orchestrator | 2025-02-10 09:16:56.335180 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:16:56.335763 | orchestrator | Monday 10 February 2025 09:16:56 +0000 (0:00:00.443) 0:00:00.443 ******* 2025-02-10 09:16:56.569486 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:16:56.569848 | orchestrator | 2025-02-10 09:16:56.571105 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:16:56.572544 | orchestrator | Monday 10 February 2025 09:16:56 +0000 (0:00:00.236) 0:00:00.679 ******* 2025-02-10 09:16:56.804089 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:16:56.805256 | orchestrator | 2025-02-10 09:16:56.806759 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:56.807422 | orchestrator | Monday 10 February 2025 09:16:56 +0000 (0:00:00.233) 0:00:00.913 ******* 2025-02-10 09:16:57.557162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-02-10 09:16:57.558161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-02-10 09:16:57.558211 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-02-10 09:16:57.559738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-02-10 09:16:57.559802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-02-10 09:16:57.560757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-02-10 09:16:57.561055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-02-10 09:16:57.561707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-02-10 09:16:57.562305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-02-10 09:16:57.563032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-02-10 09:16:57.563465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-02-10 09:16:57.563847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-02-10 09:16:57.564109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-02-10 09:16:57.564607 | orchestrator | 2025-02-10 09:16:57.564848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:57.565134 | orchestrator | Monday 10 February 2025 09:16:57 +0000 (0:00:00.752) 0:00:01.665 ******* 2025-02-10 09:16:57.781773 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:57.782152 | orchestrator | 2025-02-10 09:16:57.783032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:57.786262 | orchestrator | Monday 10 February 2025 09:16:57 +0000 
(0:00:00.226) 0:00:01.892 ******* 2025-02-10 09:16:57.990123 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:57.991030 | orchestrator | 2025-02-10 09:16:57.992409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:57.993310 | orchestrator | Monday 10 February 2025 09:16:57 +0000 (0:00:00.206) 0:00:02.098 ******* 2025-02-10 09:16:58.205648 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:58.208981 | orchestrator | 2025-02-10 09:16:58.209851 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:58.209921 | orchestrator | Monday 10 February 2025 09:16:58 +0000 (0:00:00.217) 0:00:02.316 ******* 2025-02-10 09:16:58.400521 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:58.401940 | orchestrator | 2025-02-10 09:16:58.401985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:58.402285 | orchestrator | Monday 10 February 2025 09:16:58 +0000 (0:00:00.192) 0:00:02.509 ******* 2025-02-10 09:16:58.645476 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:58.646164 | orchestrator | 2025-02-10 09:16:58.646234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:58.646318 | orchestrator | Monday 10 February 2025 09:16:58 +0000 (0:00:00.247) 0:00:02.756 ******* 2025-02-10 09:16:58.857191 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:58.857399 | orchestrator | 2025-02-10 09:16:58.857942 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:58.858477 | orchestrator | Monday 10 February 2025 09:16:58 +0000 (0:00:00.212) 0:00:02.969 ******* 2025-02-10 09:16:59.065031 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:59.066363 | orchestrator | 2025-02-10 09:16:59.068577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:59.263254 | orchestrator | Monday 10 February 2025 09:16:59 +0000 (0:00:00.206) 0:00:03.175 ******* 2025-02-10 09:16:59.263423 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:59.263572 | orchestrator | 2025-02-10 09:16:59.266525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:59.879098 | orchestrator | Monday 10 February 2025 09:16:59 +0000 (0:00:00.197) 0:00:03.373 ******* 2025-02-10 09:16:59.879255 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7) 2025-02-10 09:16:59.879350 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7) 2025-02-10 09:16:59.879815 | orchestrator | 2025-02-10 09:16:59.880896 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:59.882649 | orchestrator | Monday 10 February 2025 09:16:59 +0000 (0:00:00.616) 0:00:03.989 ******* 2025-02-10 09:17:00.659730 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_094c1351-6c25-40a9-b10a-7f3d6a96f205) 2025-02-10 09:17:00.661034 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_094c1351-6c25-40a9-b10a-7f3d6a96f205) 2025-02-10 09:17:00.661730 | orchestrator | 2025-02-10 09:17:00.664501 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 
09:17:01.101985 | orchestrator | Monday 10 February 2025 09:17:00 +0000 (0:00:00.781) 0:00:04.770 ******* 2025-02-10 09:17:01.102225 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_494ee814-0dd9-4f0f-8082-b266e2c53997) 2025-02-10 09:17:01.102497 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_494ee814-0dd9-4f0f-8082-b266e2c53997) 2025-02-10 09:17:01.102537 | orchestrator | 2025-02-10 09:17:01.104074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:01.104933 | orchestrator | Monday 10 February 2025 09:17:01 +0000 (0:00:00.438) 0:00:05.209 ******* 2025-02-10 09:17:01.578427 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_086c202d-0ccf-4be9-aa6b-e4e971478b82) 2025-02-10 09:17:01.579677 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_086c202d-0ccf-4be9-aa6b-e4e971478b82) 2025-02-10 09:17:01.580175 | orchestrator | 2025-02-10 09:17:01.580228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:01.580876 | orchestrator | Monday 10 February 2025 09:17:01 +0000 (0:00:00.473) 0:00:05.682 ******* 2025-02-10 09:17:01.907507 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:17:01.908782 | orchestrator | 2025-02-10 09:17:01.911310 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:01.911898 | orchestrator | Monday 10 February 2025 09:17:01 +0000 (0:00:00.335) 0:00:06.018 ******* 2025-02-10 09:17:02.372194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-02-10 09:17:02.372417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-02-10 09:17:02.373248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-02-10 09:17:02.374836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-02-10 09:17:02.375119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-02-10 09:17:02.375174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-02-10 09:17:02.375844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-02-10 09:17:02.376277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-02-10 09:17:02.376876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-02-10 09:17:02.377652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-02-10 09:17:02.378105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-02-10 09:17:02.378356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-02-10 09:17:02.378642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-02-10 09:17:02.379621 | orchestrator | 2025-02-10 09:17:02.379884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:02.379914 | orchestrator | Monday 10 February 2025 09:17:02 
+0000 (0:00:00.464) 0:00:06.482 ******* 2025-02-10 09:17:02.598399 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:02.599227 | orchestrator | 2025-02-10 09:17:02.599886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:02.602879 | orchestrator | Monday 10 February 2025 09:17:02 +0000 (0:00:00.226) 0:00:06.709 ******* 2025-02-10 09:17:02.785005 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:02.786125 | orchestrator | 2025-02-10 09:17:02.786832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:02.786936 | orchestrator | Monday 10 February 2025 09:17:02 +0000 (0:00:00.187) 0:00:06.896 ******* 2025-02-10 09:17:02.987535 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:02.988029 | orchestrator | 2025-02-10 09:17:02.988715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:02.989424 | orchestrator | Monday 10 February 2025 09:17:02 +0000 (0:00:00.201) 0:00:07.097 ******* 2025-02-10 09:17:03.195078 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:03.195341 | orchestrator | 2025-02-10 09:17:03.196415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:03.197347 | orchestrator | Monday 10 February 2025 09:17:03 +0000 (0:00:00.208) 0:00:07.306 ******* 2025-02-10 09:17:03.759398 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:03.760395 | orchestrator | 2025-02-10 09:17:03.761761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:03.762734 | orchestrator | Monday 10 February 2025 09:17:03 +0000 (0:00:00.563) 0:00:07.869 ******* 2025-02-10 09:17:03.980962 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:03.981567 | orchestrator | 2025-02-10 09:17:03.984073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:03.984696 | orchestrator | Monday 10 February 2025 09:17:03 +0000 (0:00:00.221) 0:00:08.090 ******* 2025-02-10 09:17:04.191887 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:04.192494 | orchestrator | 2025-02-10 09:17:04.192980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:04.194161 | orchestrator | Monday 10 February 2025 09:17:04 +0000 (0:00:00.212) 0:00:08.302 ******* 2025-02-10 09:17:04.397153 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:04.397772 | orchestrator | 2025-02-10 09:17:04.398545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:04.399491 | orchestrator | Monday 10 February 2025 09:17:04 +0000 (0:00:00.205) 0:00:08.508 ******* 2025-02-10 09:17:05.075172 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-02-10 09:17:05.075803 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-02-10 09:17:05.076766 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-02-10 09:17:05.076813 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-02-10 09:17:05.076828 | orchestrator | 2025-02-10 09:17:05.076851 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:05.292827 | orchestrator | Monday 10 February 2025 09:17:05 +0000 (0:00:00.677) 0:00:09.185 ******* 2025-02-10 09:17:05.293017 | orchestrator | 
skipping: [testbed-node-3] 2025-02-10 09:17:05.293400 | orchestrator | 2025-02-10 09:17:05.294968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:05.295485 | orchestrator | Monday 10 February 2025 09:17:05 +0000 (0:00:00.218) 0:00:09.403 ******* 2025-02-10 09:17:05.498608 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:05.499681 | orchestrator | 2025-02-10 09:17:05.500807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:05.503254 | orchestrator | Monday 10 February 2025 09:17:05 +0000 (0:00:00.205) 0:00:09.609 ******* 2025-02-10 09:17:05.698815 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:05.699042 | orchestrator | 2025-02-10 09:17:05.895568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:05.895697 | orchestrator | Monday 10 February 2025 09:17:05 +0000 (0:00:00.201) 0:00:09.810 ******* 2025-02-10 09:17:05.895727 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:05.895859 | orchestrator | 2025-02-10 09:17:05.896657 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-10 09:17:05.897116 | orchestrator | Monday 10 February 2025 09:17:05 +0000 (0:00:00.196) 0:00:10.006 ******* 2025-02-10 09:17:06.043393 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:06.043585 | orchestrator | 2025-02-10 09:17:06.044330 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-10 09:17:06.044792 | orchestrator | Monday 10 February 2025 09:17:06 +0000 (0:00:00.146) 0:00:10.152 ******* 2025-02-10 09:17:06.251263 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '70e6c2b1-f69e-5685-9251-bc72a13d87ec'}}) 2025-02-10 09:17:06.251859 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3b4a615-299b-50bf-af8e-26b6dc38e729'}}) 2025-02-10 09:17:06.251917 | orchestrator | 2025-02-10 09:17:06.252223 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-10 09:17:06.253040 | orchestrator | Monday 10 February 2025 09:17:06 +0000 (0:00:00.209) 0:00:10.362 ******* 2025-02-10 09:17:08.339898 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'}) 2025-02-10 09:17:08.340285 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'}) 2025-02-10 09:17:08.341342 | orchestrator | 2025-02-10 09:17:08.342659 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-10 09:17:08.343284 | orchestrator | Monday 10 February 2025 09:17:08 +0000 (0:00:02.087) 0:00:12.449 ******* 2025-02-10 09:17:08.517920 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:08.518149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:08.518699 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:08.519056 | orchestrator | 2025-02-10 09:17:08.520113 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-10 09:17:08.520902 | orchestrator | Monday 10 February 2025 09:17:08 +0000 (0:00:00.180) 0:00:12.630 ******* 2025-02-10 09:17:09.834848 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'}) 2025-02-10 09:17:09.836561 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'}) 2025-02-10 09:17:09.837185 | orchestrator | 2025-02-10 09:17:09.837864 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-10 09:17:09.838080 | orchestrator | Monday 10 February 2025 09:17:09 +0000 (0:00:01.315) 0:00:13.945 ******* 2025-02-10 09:17:09.988692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:09.989314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:09.991903 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:09.993206 | orchestrator | 2025-02-10 09:17:09.994003 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-10 09:17:09.994678 | orchestrator | Monday 10 February 2025 09:17:09 +0000 (0:00:00.155) 0:00:14.100 ******* 2025-02-10 09:17:10.134919 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:10.136013 | orchestrator | 2025-02-10 09:17:10.136720 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-02-10 09:17:10.138125 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.144) 0:00:14.245 ******* 2025-02-10 09:17:10.285777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:10.286743 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:10.286973 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:10.287410 | orchestrator | 2025-02-10 09:17:10.287906 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-10 09:17:10.288490 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.152) 0:00:14.397 ******* 2025-02-10 09:17:10.411694 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:10.411910 | orchestrator | 2025-02-10 09:17:10.413164 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-10 09:17:10.413897 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.125) 0:00:14.523 ******* 2025-02-10 09:17:10.566415 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:10.567489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:10.568280 | orchestrator | skipping: 
[testbed-node-3] 2025-02-10 09:17:10.569215 | orchestrator | 2025-02-10 09:17:10.570202 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-10 09:17:10.570730 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.154) 0:00:14.677 ******* 2025-02-10 09:17:10.712779 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:10.713486 | orchestrator | 2025-02-10 09:17:10.714478 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-10 09:17:10.715137 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.146) 0:00:14.824 ******* 2025-02-10 09:17:10.970643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:10.971293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:10.971620 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:10.972481 | orchestrator | 2025-02-10 09:17:10.973182 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-10 09:17:10.973938 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.258) 0:00:15.082 ******* 2025-02-10 09:17:11.104747 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:11.259299 | orchestrator | 2025-02-10 09:17:11.259436 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-10 09:17:11.259479 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.133) 0:00:15.216 ******* 2025-02-10 09:17:11.259513 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:11.260097 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:11.260597 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:11.261208 | orchestrator | 2025-02-10 09:17:11.261716 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-10 09:17:11.262208 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.152) 0:00:15.369 ******* 2025-02-10 09:17:11.423469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:11.425122 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:11.425728 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:11.426082 | orchestrator | 2025-02-10 09:17:11.426960 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-10 09:17:11.427267 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.165) 0:00:15.534 ******* 2025-02-10 09:17:11.602001 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:11.603298 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:11.603358 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:11.604429 | orchestrator | 2025-02-10 09:17:11.605475 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-10 09:17:11.606132 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.178) 0:00:15.712 ******* 2025-02-10 09:17:11.734181 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:11.735132 | orchestrator | 2025-02-10 09:17:11.735866 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-10 09:17:11.736990 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.132) 0:00:15.845 ******* 2025-02-10 09:17:11.889387 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:11.890107 | orchestrator | 2025-02-10 09:17:11.890925 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-10 09:17:11.892075 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.155) 0:00:16.000 ******* 2025-02-10 09:17:12.037857 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:12.038540 | orchestrator | 2025-02-10 09:17:12.196619 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-10 09:17:12.196809 | orchestrator | Monday 10 February 2025 09:17:12 +0000 (0:00:00.145) 0:00:16.146 ******* 2025-02-10 09:17:12.196866 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:17:12.197819 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-10 09:17:12.199065 | orchestrator | } 2025-02-10 09:17:12.199776 | orchestrator | 2025-02-10 09:17:12.200278 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-10 09:17:12.200803 | orchestrator | Monday 10 February 2025 09:17:12 +0000 (0:00:00.160) 0:00:16.307 ******* 2025-02-10 09:17:12.338481 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:17:12.339215 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-10 09:17:12.340071 | orchestrator | } 2025-02-10 09:17:12.341208 | orchestrator | 2025-02-10 09:17:12.343545 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-10 09:17:12.343960 | orchestrator | Monday 10 February 2025 09:17:12 +0000 (0:00:00.142) 0:00:16.449 ******* 2025-02-10 09:17:12.479864 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:17:12.480085 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-10 09:17:12.480174 | orchestrator | } 2025-02-10 09:17:12.481140 | orchestrator | 2025-02-10 09:17:12.482334 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-10 09:17:12.483029 | orchestrator | Monday 10 February 2025 09:17:12 +0000 (0:00:00.142) 0:00:16.591 ******* 2025-02-10 09:17:13.205651 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:13.205852 | orchestrator | 2025-02-10 09:17:13.205878 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-10 09:17:13.207968 | orchestrator | Monday 10 February 2025 09:17:13 +0000 (0:00:00.725) 0:00:17.316 ******* 2025-02-10 09:17:13.673657 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:13.673897 | orchestrator | 2025-02-10 09:17:13.674613 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-02-10 09:17:13.674649 | orchestrator | Monday 10 February 2025 09:17:13 +0000 (0:00:00.468) 0:00:17.784 ******* 2025-02-10 09:17:14.155115 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:14.155544 | orchestrator | 2025-02-10 09:17:14.155848 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-10 09:17:14.301739 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:00.481) 0:00:18.266 ******* 2025-02-10 09:17:14.301898 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:14.301973 | orchestrator | 2025-02-10 09:17:14.301995 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-10 09:17:14.303723 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:00.146) 0:00:18.412 ******* 2025-02-10 09:17:14.425960 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:14.426898 | orchestrator | 2025-02-10 09:17:14.428061 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-10 09:17:14.428848 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:00.124) 0:00:18.537 ******* 2025-02-10 09:17:14.533101 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:14.533603 | orchestrator | 2025-02-10 09:17:14.533644 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-10 09:17:14.534115 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:00.105) 0:00:18.643 ******* 2025-02-10 09:17:14.664038 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:17:14.664326 | orchestrator |  "vgs_report": { 2025-02-10 09:17:14.665773 | orchestrator |  "vg": [] 2025-02-10 09:17:14.665918 | orchestrator |  } 2025-02-10 09:17:14.666695 | orchestrator | } 2025-02-10 09:17:14.667383 | orchestrator | 2025-02-10 09:17:14.668090 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-10 09:17:14.669127 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:00.132) 0:00:18.775 ******* 2025-02-10 09:17:14.797639 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:14.799382 | orchestrator | 2025-02-10 09:17:14.800272 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-10 09:17:14.801108 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:00.134) 0:00:18.909 ******* 2025-02-10 09:17:14.930123 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:14.930510 | orchestrator | 2025-02-10 09:17:14.932584 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-10 09:17:15.106802 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:00.131) 0:00:19.041 ******* 2025-02-10 09:17:15.106951 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:15.107035 | orchestrator | 2025-02-10 09:17:15.107635 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-10 09:17:15.108662 | orchestrator | Monday 10 February 2025 09:17:15 +0000 (0:00:00.175) 0:00:19.217 ******* 2025-02-10 09:17:15.256184 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:15.257498 | orchestrator | 2025-02-10 09:17:15.258138 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-10 09:17:15.258174 | orchestrator | Monday 10 February 2025 09:17:15 +0000 (0:00:00.149) 0:00:19.366 ******* 2025-02-10 
09:17:15.410431 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:15.767829 | orchestrator | 2025-02-10 09:17:15.767977 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-10 09:17:15.767997 | orchestrator | Monday 10 February 2025 09:17:15 +0000 (0:00:00.153) 0:00:19.520 ******* 2025-02-10 09:17:15.768032 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:15.769049 | orchestrator | 2025-02-10 09:17:15.770005 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-10 09:17:15.772291 | orchestrator | Monday 10 February 2025 09:17:15 +0000 (0:00:00.359) 0:00:19.879 ******* 2025-02-10 09:17:15.914104 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:16.052058 | orchestrator | 2025-02-10 09:17:16.052160 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-10 09:17:16.052170 | orchestrator | Monday 10 February 2025 09:17:15 +0000 (0:00:00.145) 0:00:20.024 ******* 2025-02-10 09:17:16.052190 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:16.053032 | orchestrator | 2025-02-10 09:17:16.053061 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-10 09:17:16.053074 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:00.139) 0:00:20.164 ******* 2025-02-10 09:17:16.181640 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:16.183756 | orchestrator | 2025-02-10 09:17:16.183898 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-10 09:17:16.183924 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:00.128) 0:00:20.293 ******* 2025-02-10 09:17:16.318606 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:16.319389 | orchestrator | 2025-02-10 09:17:16.320162 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-10 09:17:16.320208 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:00.134) 0:00:20.427 ******* 2025-02-10 09:17:16.452294 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:16.452554 | orchestrator | 2025-02-10 09:17:16.455084 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-10 09:17:16.584706 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:00.134) 0:00:20.562 ******* 2025-02-10 09:17:16.584863 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:16.584972 | orchestrator | 2025-02-10 09:17:16.585737 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-02-10 09:17:16.587053 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:00.133) 0:00:20.695 ******* 2025-02-10 09:17:16.719005 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:16.719365 | orchestrator | 2025-02-10 09:17:16.720292 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-10 09:17:16.721415 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:00.134) 0:00:20.830 ******* 2025-02-10 09:17:16.837763 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:16.838590 | orchestrator | 2025-02-10 09:17:16.839225 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-10 09:17:16.840109 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:00.119) 0:00:20.949 
******* 2025-02-10 09:17:17.002540 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:17.002812 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:17.003484 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:17.004559 | orchestrator | 2025-02-10 09:17:17.005561 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-10 09:17:17.005823 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:00.163) 0:00:21.113 ******* 2025-02-10 09:17:17.155132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:17.155673 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:17.156339 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:17.157432 | orchestrator | 2025-02-10 09:17:17.158183 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-10 09:17:17.159006 | orchestrator | Monday 10 February 2025 09:17:17 +0000 (0:00:00.153) 0:00:21.267 ******* 2025-02-10 09:17:17.463574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:17.464164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:17.464999 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:17.466351 | orchestrator | 2025-02-10 09:17:17.467205 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-10 09:17:17.468215 | orchestrator | Monday 10 February 2025 09:17:17 +0000 (0:00:00.307) 0:00:21.574 ******* 2025-02-10 09:17:17.633516 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:17.635538 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:17.635681 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:17.635711 | orchestrator | 2025-02-10 09:17:17.636384 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-10 09:17:17.636756 | orchestrator | Monday 10 February 2025 09:17:17 +0000 (0:00:00.170) 0:00:21.744 ******* 2025-02-10 09:17:17.796309 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:17.797328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:17.798133 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:17.801578 | orchestrator | 2025-02-10 09:17:17.804544 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-10 09:17:17.949553 | orchestrator | Monday 10 February 2025 09:17:17 +0000 (0:00:00.163) 0:00:21.908 ******* 2025-02-10 09:17:17.949680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:17.950812 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:17.952095 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:17.953199 | orchestrator | 2025-02-10 09:17:17.954229 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-10 09:17:17.954929 | orchestrator | Monday 10 February 2025 09:17:17 +0000 (0:00:00.152) 0:00:22.060 ******* 2025-02-10 09:17:18.118212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:18.119073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:18.120751 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:18.122113 | orchestrator | 2025-02-10 09:17:18.122703 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-10 09:17:18.123227 | orchestrator | Monday 10 February 2025 09:17:18 +0000 (0:00:00.169) 0:00:22.230 ******* 2025-02-10 09:17:18.298638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:18.299161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:18.301234 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:18.301284 | orchestrator | 2025-02-10 09:17:18.302256 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-10 09:17:18.302614 | orchestrator | Monday 10 February 2025 09:17:18 +0000 (0:00:00.177) 0:00:22.407 ******* 2025-02-10 09:17:18.790102 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:18.790338 | orchestrator | 2025-02-10 09:17:18.790374 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-02-10 09:17:19.266828 | orchestrator | Monday 10 February 2025 09:17:18 +0000 (0:00:00.495) 0:00:22.902 ******* 2025-02-10 09:17:19.266999 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:19.267923 | orchestrator | 2025-02-10 09:17:19.274600 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-10 09:17:19.397415 | orchestrator | Monday 10 February 2025 09:17:19 +0000 (0:00:00.474) 0:00:23.377 ******* 2025-02-10 09:17:19.397619 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:19.397698 | orchestrator | 2025-02-10 09:17:19.398093 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-10 09:17:19.399057 | orchestrator | Monday 10 February 2025 09:17:19 +0000 (0:00:00.131) 0:00:23.508 ******* 2025-02-10 09:17:19.569048 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'vg_name': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'}) 2025-02-10 09:17:19.569376 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'vg_name': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'}) 2025-02-10 09:17:19.569415 | orchestrator | 2025-02-10 09:17:19.569685 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-10 09:17:19.569956 | orchestrator | Monday 10 February 2025 09:17:19 +0000 (0:00:00.172) 0:00:23.681 ******* 2025-02-10 09:17:19.721785 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:19.721960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:19.722734 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:19.723539 | orchestrator | 2025-02-10 09:17:19.724425 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-10 09:17:19.724863 | orchestrator | Monday 10 February 2025 09:17:19 +0000 (0:00:00.152) 0:00:23.833 ******* 2025-02-10 09:17:20.015624 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:20.017307 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:20.017792 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:20.018492 | orchestrator | 2025-02-10 09:17:20.019068 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-10 09:17:20.019818 | orchestrator | Monday 10 February 2025 09:17:20 +0000 (0:00:00.293) 0:00:24.126 ******* 2025-02-10 09:17:20.172794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'})  2025-02-10 09:17:20.173371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'})  2025-02-10 09:17:20.173516 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:20.173589 | orchestrator | 2025-02-10 09:17:20.173612 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-10 09:17:20.827260 | orchestrator | Monday 10 February 2025 09:17:20 +0000 (0:00:00.157) 0:00:24.284 ******* 2025-02-10 09:17:20.827423 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:17:20.829148 | orchestrator |  "lvm_report": { 2025-02-10 09:17:20.830074 | orchestrator |  "lv": [ 2025-02-10 09:17:20.830107 | orchestrator |  { 2025-02-10 09:17:20.831812 | orchestrator |  "lv_name": "osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec", 2025-02-10 09:17:20.833490 | orchestrator |  "vg_name": "ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec" 2025-02-10 09:17:20.834085 | orchestrator |  }, 2025-02-10 09:17:20.835382 | orchestrator |  { 2025-02-10 09:17:20.835948 | orchestrator |  "lv_name": "osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729", 2025-02-10 
09:17:20.837019 | orchestrator |  "vg_name": "ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729"
2025-02-10 09:17:20.838049 | orchestrator |  }
2025-02-10 09:17:20.839045 | orchestrator |  ],
2025-02-10 09:17:20.839632 | orchestrator |  "pv": [
2025-02-10 09:17:20.840420 | orchestrator |  {
2025-02-10 09:17:20.840810 | orchestrator |  "pv_name": "/dev/sdb",
2025-02-10 09:17:20.841687 | orchestrator |  "vg_name": "ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec"
2025-02-10 09:17:20.842175 | orchestrator |  },
2025-02-10 09:17:20.842649 | orchestrator |  {
2025-02-10 09:17:20.843166 | orchestrator |  "pv_name": "/dev/sdc",
2025-02-10 09:17:20.843663 | orchestrator |  "vg_name": "ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729"
2025-02-10 09:17:20.844146 | orchestrator |  }
2025-02-10 09:17:20.844799 | orchestrator |  ]
2025-02-10 09:17:20.845495 | orchestrator |  }
2025-02-10 09:17:20.845731 | orchestrator | }
2025-02-10 09:17:20.846447 | orchestrator |
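The lvm_report printed above closes the LVM preparation for testbed-node-3: two osd-block-* logical volumes in their ceph-* volume groups, backed by /dev/sdb and /dev/sdc. Judging by the task names in this play (Get list of Ceph LVs/PVs with associated VGs, Combine JSON from _lvs_cmd_output/_pvs_cmd_output), the report is assembled from LVM's JSON reporting output. The task file itself is not part of this log, so the following is only a minimal sketch of how such a report can be gathered and merged with Ansible; apart from the registered variable names taken from the task titles, everything here is illustrative:

  - name: Get list of Ceph LVs with associated VGs
    ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
    register: _lvs_cmd_output
    changed_when: false

  - name: Get list of Ceph PVs with associated VGs
    ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
    register: _pvs_cmd_output
    changed_when: false

  # lvm2 nests the result rows under report[0]; merging both reports yields a
  # dict shaped like the lvm_report structure printed in the log above.
  - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
    ansible.builtin.set_fact:
      lvm_report:
        lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
        pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

In this sketch the lvs/pvs calls are unfiltered; the actual tasks presumably restrict the output to the ceph-* volume groups before printing it. The same play now runs for testbed-node-4.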
2025-02-10 09:17:20.846782 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-02-10 09:17:20.847313 | orchestrator |
2025-02-10 09:17:20.847870 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-02-10 09:17:20.848109 | orchestrator | Monday 10 February 2025 09:17:20 +0000 (0:00:00.653) 0:00:24.937 *******
2025-02-10 09:17:21.065438 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-02-10 09:17:21.066771 | orchestrator |
2025-02-10 09:17:21.069894 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-02-10 09:17:21.072337 | orchestrator | Monday 10 February 2025 09:17:21 +0000 (0:00:00.237) 0:00:25.175 *******
2025-02-10 09:17:21.562678 | orchestrator | ok: [testbed-node-4]
2025-02-10 09:17:21.562867 | orchestrator |
2025-02-10 09:17:21.564721 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:21.565421 | orchestrator | Monday 10 February 2025 09:17:21 +0000 (0:00:00.498) 0:00:25.673 *******
2025-02-10 09:17:21.988617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-02-10 09:17:21.988809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-02-10 09:17:21.989150 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-02-10 09:17:21.990096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-02-10 09:17:21.991618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-02-10 09:17:21.992231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-02-10 09:17:21.993082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-02-10 09:17:21.993953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-02-10 09:17:21.994358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-02-10 09:17:21.994756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-02-10 09:17:21.995339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-02-10 09:17:21.995755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-02-10 09:17:21.996332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-02-10 09:17:21.996858 | orchestrator |
2025-02-10 09:17:21.997373 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:21.997834 | orchestrator | Monday 10 February 2025 09:17:21 +0000 (0:00:00.426) 0:00:26.099 *******
2025-02-10 09:17:22.186553 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:22.186757 | orchestrator |
2025-02-10 09:17:22.186790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:22.187079 | orchestrator | Monday 10 February 2025 09:17:22 +0000 (0:00:00.197) 0:00:26.297 *******
2025-02-10 09:17:22.356113 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:22.356496 | orchestrator |
2025-02-10 09:17:22.358261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:22.359208 | orchestrator | Monday 10 February 2025 09:17:22 +0000 (0:00:00.170) 0:00:26.467 *******
2025-02-10 09:17:22.545994 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:22.546579 | orchestrator |
2025-02-10 09:17:22.546618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:22.548058 | orchestrator | Monday 10 February 2025 09:17:22 +0000 (0:00:00.189) 0:00:26.656 *******
2025-02-10 09:17:22.709943 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:22.711113 | orchestrator |
2025-02-10 09:17:22.711194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:22.913360 | orchestrator | Monday 10 February 2025 09:17:22 +0000 (0:00:00.165) 0:00:26.822 *******
2025-02-10 09:17:22.913545 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:22.913633 | orchestrator |
2025-02-10 09:17:22.913652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:23.100027 | orchestrator | Monday 10 February 2025 09:17:22 +0000 (0:00:00.202) 0:00:27.025 *******
2025-02-10 09:17:23.100182 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:23.100286 | orchestrator |
2025-02-10 09:17:23.103068 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:23.285919 | orchestrator | Monday 10 February 2025 09:17:23 +0000 (0:00:00.186) 0:00:27.211 *******
2025-02-10 09:17:23.286200 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:23.286393 | orchestrator |
2025-02-10 09:17:23.287747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:23.287877 | orchestrator | Monday 10 February 2025 09:17:23 +0000 (0:00:00.185) 0:00:27.397 *******
2025-02-10 09:17:23.479544 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:24.277388 | orchestrator |
2025-02-10 09:17:24.277572 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-02-10 09:17:24.277596 | orchestrator | Monday 10 February 2025 09:17:23 +0000 (0:00:00.191) 0:00:27.588 *******
2025-02-10 09:17:24.277628 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d)
2025-02-10 09:17:24.277885 | orchestrator | ok: [testbed-node-4] =>
(item=scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d) 2025-02-10 09:17:24.280276 | orchestrator | 2025-02-10 09:17:24.280972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:24.281490 | orchestrator | Monday 10 February 2025 09:17:24 +0000 (0:00:00.799) 0:00:28.388 ******* 2025-02-10 09:17:24.782170 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_103f3392-831d-4ee6-b0f0-d6be015816d3) 2025-02-10 09:17:24.782773 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_103f3392-831d-4ee6-b0f0-d6be015816d3) 2025-02-10 09:17:24.783896 | orchestrator | 2025-02-10 09:17:24.784382 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:24.784948 | orchestrator | Monday 10 February 2025 09:17:24 +0000 (0:00:00.504) 0:00:28.892 ******* 2025-02-10 09:17:25.236797 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_23794fae-2c08-458a-becf-a15050b8218b) 2025-02-10 09:17:25.237642 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_23794fae-2c08-458a-becf-a15050b8218b) 2025-02-10 09:17:25.238282 | orchestrator | 2025-02-10 09:17:25.239080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:25.239647 | orchestrator | Monday 10 February 2025 09:17:25 +0000 (0:00:00.456) 0:00:29.348 ******* 2025-02-10 09:17:25.676286 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_492baa9f-f661-44dd-a3d2-70d79942748c) 2025-02-10 09:17:25.676517 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_492baa9f-f661-44dd-a3d2-70d79942748c) 2025-02-10 09:17:25.678433 | orchestrator | 2025-02-10 09:17:25.679056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:25.679873 | orchestrator | Monday 10 February 2025 09:17:25 +0000 (0:00:00.436) 0:00:29.785 ******* 2025-02-10 09:17:25.984303 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:17:25.985047 | orchestrator | 2025-02-10 09:17:25.985120 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:25.985456 | orchestrator | Monday 10 February 2025 09:17:25 +0000 (0:00:00.310) 0:00:30.095 ******* 2025-02-10 09:17:26.458758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-02-10 09:17:26.460997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-02-10 09:17:26.461099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-02-10 09:17:26.461658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-02-10 09:17:26.462011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-02-10 09:17:26.462646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-02-10 09:17:26.462744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-02-10 09:17:26.463198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-02-10 09:17:26.464493 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-02-10 09:17:26.465803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-02-10 09:17:26.467369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-02-10 09:17:26.468258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-02-10 09:17:26.469304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-02-10 09:17:26.470076 | orchestrator | 2025-02-10 09:17:26.471216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:26.471804 | orchestrator | Monday 10 February 2025 09:17:26 +0000 (0:00:00.473) 0:00:30.569 ******* 2025-02-10 09:17:26.657129 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:26.657418 | orchestrator | 2025-02-10 09:17:26.658453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:26.659540 | orchestrator | Monday 10 February 2025 09:17:26 +0000 (0:00:00.198) 0:00:30.767 ******* 2025-02-10 09:17:26.853902 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:26.855353 | orchestrator | 2025-02-10 09:17:26.855602 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:26.856495 | orchestrator | Monday 10 February 2025 09:17:26 +0000 (0:00:00.197) 0:00:30.964 ******* 2025-02-10 09:17:27.042647 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:27.042809 | orchestrator | 2025-02-10 09:17:27.045009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:27.477274 | orchestrator | Monday 10 February 2025 09:17:27 +0000 (0:00:00.189) 0:00:31.154 ******* 2025-02-10 09:17:27.477506 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:27.478714 | orchestrator | 2025-02-10 09:17:27.652879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:27.653017 | orchestrator | Monday 10 February 2025 09:17:27 +0000 (0:00:00.433) 0:00:31.587 ******* 2025-02-10 09:17:27.653054 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:27.653884 | orchestrator | 2025-02-10 09:17:27.655281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:27.656077 | orchestrator | Monday 10 February 2025 09:17:27 +0000 (0:00:00.177) 0:00:31.765 ******* 2025-02-10 09:17:27.835332 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:27.836965 | orchestrator | 2025-02-10 09:17:27.837109 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:27.837325 | orchestrator | Monday 10 February 2025 09:17:27 +0000 (0:00:00.180) 0:00:31.946 ******* 2025-02-10 09:17:28.008922 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:28.009097 | orchestrator | 2025-02-10 09:17:28.009156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:28.009177 | orchestrator | Monday 10 February 2025 09:17:28 +0000 (0:00:00.175) 0:00:32.121 ******* 2025-02-10 09:17:28.192804 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:28.192997 | orchestrator | 2025-02-10 09:17:28.195342 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-02-10 09:17:28.858379 | orchestrator | Monday 10 February 2025 09:17:28 +0000 (0:00:00.183) 0:00:32.304 ******* 2025-02-10 09:17:28.858560 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-02-10 09:17:28.858642 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-02-10 09:17:28.858950 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-02-10 09:17:28.859629 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-02-10 09:17:28.861697 | orchestrator | 2025-02-10 09:17:29.045039 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:29.045194 | orchestrator | Monday 10 February 2025 09:17:28 +0000 (0:00:00.665) 0:00:32.970 ******* 2025-02-10 09:17:29.045233 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:29.227915 | orchestrator | 2025-02-10 09:17:29.228047 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:29.228067 | orchestrator | Monday 10 February 2025 09:17:29 +0000 (0:00:00.186) 0:00:33.157 ******* 2025-02-10 09:17:29.228100 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:29.228171 | orchestrator | 2025-02-10 09:17:29.228504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:29.228901 | orchestrator | Monday 10 February 2025 09:17:29 +0000 (0:00:00.182) 0:00:33.339 ******* 2025-02-10 09:17:29.396937 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:29.397177 | orchestrator | 2025-02-10 09:17:29.397708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:29.397999 | orchestrator | Monday 10 February 2025 09:17:29 +0000 (0:00:00.169) 0:00:33.509 ******* 2025-02-10 09:17:29.585608 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:29.586957 | orchestrator | 2025-02-10 09:17:29.587399 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-10 09:17:29.866304 | orchestrator | Monday 10 February 2025 09:17:29 +0000 (0:00:00.186) 0:00:33.696 ******* 2025-02-10 09:17:29.866439 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:29.866594 | orchestrator | 2025-02-10 09:17:29.866623 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-10 09:17:29.866807 | orchestrator | Monday 10 February 2025 09:17:29 +0000 (0:00:00.281) 0:00:33.977 ******* 2025-02-10 09:17:30.055149 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5101bad7-da03-58be-8044-cbe4500fcec9'}}) 2025-02-10 09:17:30.055414 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd59ecc87-3940-56cd-881a-fbc914ec02de'}}) 2025-02-10 09:17:30.055708 | orchestrator | 2025-02-10 09:17:30.056068 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-10 09:17:30.057211 | orchestrator | Monday 10 February 2025 09:17:30 +0000 (0:00:00.189) 0:00:34.167 ******* 2025-02-10 09:17:31.531209 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'}) 2025-02-10 09:17:31.531404 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 
'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})
2025-02-10 09:17:31.531432 | orchestrator |
2025-02-10 09:17:31.531949 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-02-10 09:17:31.532164 | orchestrator | Monday 10 February 2025 09:17:31 +0000 (0:00:01.476) 0:00:35.643 *******
2025-02-10 09:17:31.681960 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})
2025-02-10 09:17:31.682976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})
2025-02-10 09:17:31.685089 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:31.685659 | orchestrator |
2025-02-10 09:17:31.685696 | orchestrator | TASK [Create block LVs] ********************************************************
2025-02-10 09:17:31.685717 | orchestrator | Monday 10 February 2025 09:17:31 +0000 (0:00:00.150) 0:00:35.794 *******
2025-02-10 09:17:32.785286 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})
2025-02-10 09:17:32.786649 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})
2025-02-10 09:17:32.787226 | orchestrator |
2025-02-10 09:17:32.787857 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-02-10 09:17:32.788372 | orchestrator | Monday 10 February 2025 09:17:32 +0000 (0:00:01.101) 0:00:36.895 *******
2025-02-10 09:17:32.938547 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})
2025-02-10 09:17:32.938921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})
2025-02-10 09:17:32.939439 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:32.939859 | orchestrator |
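At this point both OSD volumes for testbed-node-4 exist: the Create dict of block VGs -> PVs from ceph_osd_devices task mapped sdb and sdc to their osd_lvm_uuid values, and the changed items above show one ceph-<uuid> volume group per device with a single osd-block-<uuid> logical volume in it. The playbook source is not included in this log, so the tasks below are only a rough sketch, using the community.general LVM modules, of how that mapping can be turned into VGs and LVs; the variable names ceph_osd_devices and lvm_volumes and the item shapes come from the task output above, while the /dev/ prefix for the dict keys, the module choice, and the LV size are assumptions:

  # One VG per OSD device; the dict items (key=device, value.osd_lvm_uuid) match
  # what the 'Create dict of block VGs -> PVs from ceph_osd_devices' task prints above.
  - name: Create block VGs
    community.general.lvg:
      vg: "ceph-{{ item.value.osd_lvm_uuid }}"
      pvs: "/dev/{{ item.key }}"  # assumes the keys are bare device names such as sdb
    loop: "{{ ceph_osd_devices | dict2items }}"

  # One data LV per VG; the data/data_vg keys match the changed items shown above.
  - name: Create block LVs
    community.general.lvol:
      vg: "{{ item.data_vg }}"
      lv: "{{ item.data }}"
      size: 100%VG  # use the whole VG for the OSD block LV
    loop: "{{ lvm_volumes }}"

The DB and WAL VG/LV tasks that follow are all skipped, which is consistent with this testbed defining neither ceph_db_devices nor ceph_wal_devices, so data, DB, and WAL stay together on the block LV of each OSD.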
2025-02-10 09:17:32.940434 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-02-10 09:17:32.940875 | orchestrator | Monday 10 February 2025 09:17:32 +0000 (0:00:00.141) 0:00:37.050 *******
2025-02-10 09:17:33.081140 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:33.082142 | orchestrator |
2025-02-10 09:17:33.082909 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-02-10 09:17:33.083837 | orchestrator | Monday 10 February 2025 09:17:33 +0000 (0:00:00.141) 0:00:37.192 *******
2025-02-10 09:17:33.225642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})
2025-02-10 09:17:33.226380 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})
2025-02-10 09:17:33.226414 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:33.226597 | orchestrator |
2025-02-10 09:17:33.227107 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-02-10 09:17:33.227600 | orchestrator | Monday 10 February 2025 09:17:33 +0000 (0:00:00.144) 0:00:37.336 *******
2025-02-10 09:17:33.360593 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:33.361752 | orchestrator |
2025-02-10 09:17:33.362747 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-02-10 09:17:33.363507 | orchestrator | Monday 10 February 2025 09:17:33 +0000 (0:00:00.134) 0:00:37.471 *******
2025-02-10 09:17:33.520020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})
2025-02-10 09:17:33.520214 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})
2025-02-10 09:17:33.520243 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:33.520303 | orchestrator |
2025-02-10 09:17:33.520937 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-02-10 09:17:33.521145 | orchestrator | Monday 10 February 2025 09:17:33 +0000 (0:00:00.160) 0:00:37.632 *******
2025-02-10 09:17:33.863852 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:33.864909 | orchestrator |
2025-02-10 09:17:33.865575 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-02-10 09:17:33.866205 | orchestrator | Monday 10 February 2025 09:17:33 +0000 (0:00:00.341) 0:00:37.973 *******
2025-02-10 09:17:34.071647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})
2025-02-10 09:17:34.072070 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})
2025-02-10 09:17:34.072190 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:34.072949 | orchestrator |
2025-02-10 09:17:34.073579 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-02-10 09:17:34.073909 | orchestrator | Monday 10 February 2025 09:17:34 +0000 (0:00:00.209) 0:00:38.183 *******
2025-02-10 09:17:34.254365 | orchestrator | ok: [testbed-node-4]
2025-02-10 09:17:34.254758 | orchestrator |
2025-02-10 09:17:34.255569 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-02-10 09:17:34.255832 | orchestrator | Monday 10 February 2025 09:17:34 +0000 (0:00:00.183) 0:00:38.366 *******
2025-02-10 09:17:34.430668 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})
2025-02-10 09:17:34.430846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})
2025-02-10 09:17:34.431157 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:17:34.431809 | orchestrator |
2025-02-10 09:17:34.432675 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-02-10 09:17:34.433077 | orchestrator | Monday 10 February 2025 09:17:34 +0000 (0:00:00.174) 0:00:38.540 *******
2025-02-10 09:17:34.614329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 
'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:34.614768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:34.614860 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:34.615916 | orchestrator | 2025-02-10 09:17:34.617032 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-10 09:17:34.618209 | orchestrator | Monday 10 February 2025 09:17:34 +0000 (0:00:00.184) 0:00:38.725 ******* 2025-02-10 09:17:34.781550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:34.781802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:34.782397 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:34.782640 | orchestrator | 2025-02-10 09:17:34.783384 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-10 09:17:34.784021 | orchestrator | Monday 10 February 2025 09:17:34 +0000 (0:00:00.166) 0:00:38.891 ******* 2025-02-10 09:17:34.934274 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:35.089161 | orchestrator | 2025-02-10 09:17:35.089299 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-10 09:17:35.089319 | orchestrator | Monday 10 February 2025 09:17:34 +0000 (0:00:00.151) 0:00:39.043 ******* 2025-02-10 09:17:35.089351 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:35.089780 | orchestrator | 2025-02-10 09:17:35.091093 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-10 09:17:35.091675 | orchestrator | Monday 10 February 2025 09:17:35 +0000 (0:00:00.156) 0:00:39.200 ******* 2025-02-10 09:17:35.242433 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:35.242913 | orchestrator | 2025-02-10 09:17:35.243396 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-10 09:17:35.244009 | orchestrator | Monday 10 February 2025 09:17:35 +0000 (0:00:00.152) 0:00:39.352 ******* 2025-02-10 09:17:35.412455 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:17:35.414127 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-10 09:17:35.415852 | orchestrator | } 2025-02-10 09:17:35.417821 | orchestrator | 2025-02-10 09:17:35.419069 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-10 09:17:35.420281 | orchestrator | Monday 10 February 2025 09:17:35 +0000 (0:00:00.170) 0:00:39.522 ******* 2025-02-10 09:17:35.580241 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:17:35.581652 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-10 09:17:35.581725 | orchestrator | } 2025-02-10 09:17:35.581820 | orchestrator | 2025-02-10 09:17:35.583663 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-10 09:17:35.584615 | orchestrator | Monday 10 February 2025 09:17:35 +0000 (0:00:00.168) 0:00:39.690 ******* 2025-02-10 09:17:35.714215 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:17:35.714408 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-10 
09:17:35.714434 | orchestrator | } 2025-02-10 09:17:35.714945 | orchestrator | 2025-02-10 09:17:35.715274 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-10 09:17:35.715540 | orchestrator | Monday 10 February 2025 09:17:35 +0000 (0:00:00.134) 0:00:39.824 ******* 2025-02-10 09:17:36.276207 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:17:36.276375 | orchestrator | 2025-02-10 09:17:36.278188 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-10 09:17:36.278733 | orchestrator | Monday 10 February 2025 09:17:36 +0000 (0:00:00.563) 0:00:40.388 ******* 2025-02-10 09:17:36.834544 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:17:36.834734 | orchestrator | 2025-02-10 09:17:36.835648 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-02-10 09:17:36.836030 | orchestrator | Monday 10 February 2025 09:17:36 +0000 (0:00:00.554) 0:00:40.942 ******* 2025-02-10 09:17:37.319606 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:17:37.319858 | orchestrator | 2025-02-10 09:17:37.319893 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-10 09:17:37.481643 | orchestrator | Monday 10 February 2025 09:17:37 +0000 (0:00:00.488) 0:00:41.431 ******* 2025-02-10 09:17:37.481796 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:17:37.481869 | orchestrator | 2025-02-10 09:17:37.482833 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-10 09:17:37.484540 | orchestrator | Monday 10 February 2025 09:17:37 +0000 (0:00:00.158) 0:00:41.590 ******* 2025-02-10 09:17:37.595981 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:37.596148 | orchestrator | 2025-02-10 09:17:37.596883 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-10 09:17:37.707412 | orchestrator | Monday 10 February 2025 09:17:37 +0000 (0:00:00.117) 0:00:41.708 ******* 2025-02-10 09:17:37.707642 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:37.707726 | orchestrator | 2025-02-10 09:17:37.707752 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-10 09:17:37.708797 | orchestrator | Monday 10 February 2025 09:17:37 +0000 (0:00:00.110) 0:00:41.818 ******* 2025-02-10 09:17:37.853734 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:17:37.853933 | orchestrator |  "vgs_report": { 2025-02-10 09:17:37.854252 | orchestrator |  "vg": [] 2025-02-10 09:17:37.854277 | orchestrator |  } 2025-02-10 09:17:37.854700 | orchestrator | } 2025-02-10 09:17:37.855140 | orchestrator | 2025-02-10 09:17:37.855448 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-10 09:17:37.856050 | orchestrator | Monday 10 February 2025 09:17:37 +0000 (0:00:00.146) 0:00:41.965 ******* 2025-02-10 09:17:38.023682 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:38.023904 | orchestrator | 2025-02-10 09:17:38.024214 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-10 09:17:38.025420 | orchestrator | Monday 10 February 2025 09:17:38 +0000 (0:00:00.169) 0:00:42.134 ******* 2025-02-10 09:17:38.173574 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:38.174221 | orchestrator | 2025-02-10 09:17:38.174282 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-02-10 09:17:38.359694 | orchestrator | Monday 10 February 2025 09:17:38 +0000 (0:00:00.150) 0:00:42.285 ******* 2025-02-10 09:17:38.359897 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:38.360039 | orchestrator | 2025-02-10 09:17:38.360082 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-10 09:17:38.518253 | orchestrator | Monday 10 February 2025 09:17:38 +0000 (0:00:00.185) 0:00:42.470 ******* 2025-02-10 09:17:38.518394 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:38.526102 | orchestrator | 2025-02-10 09:17:38.527089 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-10 09:17:38.527147 | orchestrator | Monday 10 February 2025 09:17:38 +0000 (0:00:00.158) 0:00:42.629 ******* 2025-02-10 09:17:38.895688 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:38.896281 | orchestrator | 2025-02-10 09:17:38.896317 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-10 09:17:38.896342 | orchestrator | Monday 10 February 2025 09:17:38 +0000 (0:00:00.376) 0:00:43.006 ******* 2025-02-10 09:17:39.096966 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:39.097732 | orchestrator | 2025-02-10 09:17:39.098008 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-10 09:17:39.098546 | orchestrator | Monday 10 February 2025 09:17:39 +0000 (0:00:00.201) 0:00:43.207 ******* 2025-02-10 09:17:39.251838 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:39.253134 | orchestrator | 2025-02-10 09:17:39.256212 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-10 09:17:39.257432 | orchestrator | Monday 10 February 2025 09:17:39 +0000 (0:00:00.155) 0:00:43.362 ******* 2025-02-10 09:17:39.399057 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:39.400654 | orchestrator | 2025-02-10 09:17:39.403711 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-10 09:17:39.404260 | orchestrator | Monday 10 February 2025 09:17:39 +0000 (0:00:00.146) 0:00:43.509 ******* 2025-02-10 09:17:39.546396 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:39.547440 | orchestrator | 2025-02-10 09:17:39.548154 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-10 09:17:39.549768 | orchestrator | Monday 10 February 2025 09:17:39 +0000 (0:00:00.148) 0:00:43.658 ******* 2025-02-10 09:17:39.706399 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:39.706658 | orchestrator | 2025-02-10 09:17:39.707391 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-10 09:17:39.707915 | orchestrator | Monday 10 February 2025 09:17:39 +0000 (0:00:00.159) 0:00:43.817 ******* 2025-02-10 09:17:39.856315 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:39.856669 | orchestrator | 2025-02-10 09:17:39.859066 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-10 09:17:40.008687 | orchestrator | Monday 10 February 2025 09:17:39 +0000 (0:00:00.147) 0:00:43.965 ******* 2025-02-10 09:17:40.008840 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:40.009287 | orchestrator | 2025-02-10 09:17:40.009602 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-02-10 09:17:40.010722 | orchestrator | Monday 10 February 2025 09:17:40 +0000 (0:00:00.153) 0:00:44.119 ******* 2025-02-10 09:17:40.160205 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:40.163753 | orchestrator | 2025-02-10 09:17:40.164196 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-10 09:17:40.164265 | orchestrator | Monday 10 February 2025 09:17:40 +0000 (0:00:00.151) 0:00:44.270 ******* 2025-02-10 09:17:40.311790 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:40.312017 | orchestrator | 2025-02-10 09:17:40.312807 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-10 09:17:40.313591 | orchestrator | Monday 10 February 2025 09:17:40 +0000 (0:00:00.150) 0:00:44.421 ******* 2025-02-10 09:17:40.492536 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:40.493006 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:40.495408 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:40.496518 | orchestrator | 2025-02-10 09:17:40.497125 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-10 09:17:40.497876 | orchestrator | Monday 10 February 2025 09:17:40 +0000 (0:00:00.181) 0:00:44.602 ******* 2025-02-10 09:17:40.664573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:40.665220 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:40.666732 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:40.668090 | orchestrator | 2025-02-10 09:17:40.669289 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-10 09:17:40.670237 | orchestrator | Monday 10 February 2025 09:17:40 +0000 (0:00:00.173) 0:00:44.775 ******* 2025-02-10 09:17:41.074911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:41.075174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:41.076579 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:41.077872 | orchestrator | 2025-02-10 09:17:41.079545 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-10 09:17:41.080519 | orchestrator | Monday 10 February 2025 09:17:41 +0000 (0:00:00.409) 0:00:45.185 ******* 2025-02-10 09:17:41.273266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:41.276270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 
09:17:41.277197 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:41.277225 | orchestrator | 2025-02-10 09:17:41.277241 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-10 09:17:41.449757 | orchestrator | Monday 10 February 2025 09:17:41 +0000 (0:00:00.197) 0:00:45.382 ******* 2025-02-10 09:17:41.449925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:41.452517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:41.453263 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:41.453306 | orchestrator | 2025-02-10 09:17:41.454556 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-10 09:17:41.454781 | orchestrator | Monday 10 February 2025 09:17:41 +0000 (0:00:00.177) 0:00:45.560 ******* 2025-02-10 09:17:41.629126 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:41.632203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:41.632615 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:41.632920 | orchestrator | 2025-02-10 09:17:41.633658 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-10 09:17:41.633985 | orchestrator | Monday 10 February 2025 09:17:41 +0000 (0:00:00.178) 0:00:45.738 ******* 2025-02-10 09:17:41.813429 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:41.814423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:41.817177 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:41.986996 | orchestrator | 2025-02-10 09:17:41.987177 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-10 09:17:41.987201 | orchestrator | Monday 10 February 2025 09:17:41 +0000 (0:00:00.183) 0:00:45.922 ******* 2025-02-10 09:17:41.987238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:41.987352 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:41.987741 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:41.987776 | orchestrator | 2025-02-10 09:17:41.988139 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-10 09:17:41.988414 | orchestrator | Monday 10 February 2025 09:17:41 +0000 (0:00:00.176) 0:00:46.098 ******* 2025-02-10 09:17:42.519028 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:17:42.519539 | orchestrator | 2025-02-10 09:17:42.520165 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-02-10 09:17:42.520943 | orchestrator | Monday 10 February 2025 09:17:42 +0000 (0:00:00.531) 0:00:46.629 ******* 2025-02-10 09:17:43.109097 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:17:43.109607 | orchestrator | 2025-02-10 09:17:43.109658 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-10 09:17:43.109764 | orchestrator | Monday 10 February 2025 09:17:43 +0000 (0:00:00.588) 0:00:47.218 ******* 2025-02-10 09:17:43.272018 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:17:43.272596 | orchestrator | 2025-02-10 09:17:43.272689 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-10 09:17:43.272759 | orchestrator | Monday 10 February 2025 09:17:43 +0000 (0:00:00.165) 0:00:47.383 ******* 2025-02-10 09:17:43.467941 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'vg_name': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'}) 2025-02-10 09:17:43.468277 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'vg_name': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'}) 2025-02-10 09:17:43.468907 | orchestrator | 2025-02-10 09:17:43.469712 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-10 09:17:43.470349 | orchestrator | Monday 10 February 2025 09:17:43 +0000 (0:00:00.195) 0:00:47.579 ******* 2025-02-10 09:17:43.893572 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:43.893778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:43.894649 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:43.895423 | orchestrator | 2025-02-10 09:17:43.896493 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-10 09:17:43.897132 | orchestrator | Monday 10 February 2025 09:17:43 +0000 (0:00:00.424) 0:00:48.004 ******* 2025-02-10 09:17:44.098602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:44.098753 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:44.099816 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:44.101144 | orchestrator | 2025-02-10 09:17:44.102206 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-10 09:17:44.103282 | orchestrator | Monday 10 February 2025 09:17:44 +0000 (0:00:00.205) 0:00:48.210 ******* 2025-02-10 09:17:44.324998 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'})  2025-02-10 09:17:44.329032 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'})  2025-02-10 09:17:44.330088 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:44.330836 | orchestrator | 2025-02-10 
09:17:44.331528 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-10 09:17:44.332340 | orchestrator | Monday 10 February 2025 09:17:44 +0000 (0:00:00.223) 0:00:48.433 ******* 2025-02-10 09:17:45.225655 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:17:45.226077 | orchestrator |  "lvm_report": { 2025-02-10 09:17:45.228722 | orchestrator |  "lv": [ 2025-02-10 09:17:45.229119 | orchestrator |  { 2025-02-10 09:17:45.229171 | orchestrator |  "lv_name": "osd-block-5101bad7-da03-58be-8044-cbe4500fcec9", 2025-02-10 09:17:45.230114 | orchestrator |  "vg_name": "ceph-5101bad7-da03-58be-8044-cbe4500fcec9" 2025-02-10 09:17:45.230569 | orchestrator |  }, 2025-02-10 09:17:45.231373 | orchestrator |  { 2025-02-10 09:17:45.232137 | orchestrator |  "lv_name": "osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de", 2025-02-10 09:17:45.233186 | orchestrator |  "vg_name": "ceph-d59ecc87-3940-56cd-881a-fbc914ec02de" 2025-02-10 09:17:45.233867 | orchestrator |  } 2025-02-10 09:17:45.234842 | orchestrator |  ], 2025-02-10 09:17:45.235888 | orchestrator |  "pv": [ 2025-02-10 09:17:45.236597 | orchestrator |  { 2025-02-10 09:17:45.237037 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-10 09:17:45.238174 | orchestrator |  "vg_name": "ceph-5101bad7-da03-58be-8044-cbe4500fcec9" 2025-02-10 09:17:45.238297 | orchestrator |  }, 2025-02-10 09:17:45.238915 | orchestrator |  { 2025-02-10 09:17:45.239399 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-10 09:17:45.239840 | orchestrator |  "vg_name": "ceph-d59ecc87-3940-56cd-881a-fbc914ec02de" 2025-02-10 09:17:45.240589 | orchestrator |  } 2025-02-10 09:17:45.240785 | orchestrator |  ] 2025-02-10 09:17:45.241441 | orchestrator |  } 2025-02-10 09:17:45.242233 | orchestrator | } 2025-02-10 09:17:45.242423 | orchestrator | 2025-02-10 09:17:45.243162 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-10 09:17:45.243543 | orchestrator | 2025-02-10 09:17:45.243932 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:17:45.245336 | orchestrator | Monday 10 February 2025 09:17:45 +0000 (0:00:00.901) 0:00:49.335 ******* 2025-02-10 09:17:45.904009 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-10 09:17:45.904610 | orchestrator | 2025-02-10 09:17:45.904647 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:17:45.905417 | orchestrator | Monday 10 February 2025 09:17:45 +0000 (0:00:00.676) 0:00:50.012 ******* 2025-02-10 09:17:46.154346 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:17:46.154899 | orchestrator | 2025-02-10 09:17:46.154950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:46.155892 | orchestrator | Monday 10 February 2025 09:17:46 +0000 (0:00:00.248) 0:00:50.260 ******* 2025-02-10 09:17:46.667298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-02-10 09:17:46.667595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-02-10 09:17:46.668351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-02-10 09:17:46.669419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-02-10 09:17:46.669725 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-02-10 09:17:46.669889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-02-10 09:17:46.670388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-02-10 09:17:46.670822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-02-10 09:17:46.671427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-02-10 09:17:46.671793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-02-10 09:17:46.672142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-02-10 09:17:46.672658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-02-10 09:17:46.672905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-02-10 09:17:46.673205 | orchestrator | 2025-02-10 09:17:46.673761 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:46.673895 | orchestrator | Monday 10 February 2025 09:17:46 +0000 (0:00:00.512) 0:00:50.772 ******* 2025-02-10 09:17:46.889421 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:46.889877 | orchestrator | 2025-02-10 09:17:46.890581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:46.893710 | orchestrator | Monday 10 February 2025 09:17:46 +0000 (0:00:00.226) 0:00:50.999 ******* 2025-02-10 09:17:47.103785 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:47.104027 | orchestrator | 2025-02-10 09:17:47.104712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:47.105075 | orchestrator | Monday 10 February 2025 09:17:47 +0000 (0:00:00.215) 0:00:51.215 ******* 2025-02-10 09:17:47.318121 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:47.318425 | orchestrator | 2025-02-10 09:17:47.319848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:47.322402 | orchestrator | Monday 10 February 2025 09:17:47 +0000 (0:00:00.213) 0:00:51.428 ******* 2025-02-10 09:17:47.528772 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:47.529799 | orchestrator | 2025-02-10 09:17:47.530000 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:47.530774 | orchestrator | Monday 10 February 2025 09:17:47 +0000 (0:00:00.210) 0:00:51.640 ******* 2025-02-10 09:17:47.739984 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:47.740471 | orchestrator | 2025-02-10 09:17:47.741282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:47.741762 | orchestrator | Monday 10 February 2025 09:17:47 +0000 (0:00:00.210) 0:00:51.850 ******* 2025-02-10 09:17:47.980173 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:47.982618 | orchestrator | 2025-02-10 09:17:47.982676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:48.184255 | orchestrator | Monday 10 February 2025 09:17:47 +0000 (0:00:00.238) 0:00:52.088 ******* 2025-02-10 09:17:48.184393 | orchestrator | skipping: 
[testbed-node-5] 2025-02-10 09:17:48.184845 | orchestrator | 2025-02-10 09:17:48.185202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:48.185804 | orchestrator | Monday 10 February 2025 09:17:48 +0000 (0:00:00.207) 0:00:52.296 ******* 2025-02-10 09:17:48.906682 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:48.908660 | orchestrator | 2025-02-10 09:17:48.908823 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:48.909990 | orchestrator | Monday 10 February 2025 09:17:48 +0000 (0:00:00.721) 0:00:53.018 ******* 2025-02-10 09:17:49.382433 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde) 2025-02-10 09:17:49.384391 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde) 2025-02-10 09:17:49.849381 | orchestrator | 2025-02-10 09:17:49.849530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:49.849542 | orchestrator | Monday 10 February 2025 09:17:49 +0000 (0:00:00.475) 0:00:53.493 ******* 2025-02-10 09:17:49.849562 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a31d8f91-c02a-4f65-9bd6-abd5e53b34f2) 2025-02-10 09:17:49.853653 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a31d8f91-c02a-4f65-9bd6-abd5e53b34f2) 2025-02-10 09:17:49.854133 | orchestrator | 2025-02-10 09:17:49.854929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:49.855898 | orchestrator | Monday 10 February 2025 09:17:49 +0000 (0:00:00.464) 0:00:53.957 ******* 2025-02-10 09:17:50.331587 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_be832b54-23bf-4f17-8551-69f0e04b6625) 2025-02-10 09:17:50.332267 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_be832b54-23bf-4f17-8551-69f0e04b6625) 2025-02-10 09:17:50.332406 | orchestrator | 2025-02-10 09:17:50.333463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:50.337100 | orchestrator | Monday 10 February 2025 09:17:50 +0000 (0:00:00.483) 0:00:54.441 ******* 2025-02-10 09:17:50.803553 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_809e68db-7594-4e4e-90c0-4a7ae6eb5d4d) 2025-02-10 09:17:50.805062 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_809e68db-7594-4e4e-90c0-4a7ae6eb5d4d) 2025-02-10 09:17:50.806100 | orchestrator | 2025-02-10 09:17:50.806141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:17:51.173399 | orchestrator | Monday 10 February 2025 09:17:50 +0000 (0:00:00.473) 0:00:54.914 ******* 2025-02-10 09:17:51.173578 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:17:51.178720 | orchestrator | 2025-02-10 09:17:51.699049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:51.700027 | orchestrator | Monday 10 February 2025 09:17:51 +0000 (0:00:00.369) 0:00:55.284 ******* 2025-02-10 09:17:51.700093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-02-10 09:17:51.700338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-02-10 09:17:51.701583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-02-10 09:17:51.703618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-02-10 09:17:51.703957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-02-10 09:17:51.704907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-02-10 09:17:51.707529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-02-10 09:17:51.707645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-02-10 09:17:51.708588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-02-10 09:17:51.709152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-02-10 09:17:51.709616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-02-10 09:17:51.710088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-02-10 09:17:51.710958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-02-10 09:17:51.711379 | orchestrator | 2025-02-10 09:17:51.713947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:51.714265 | orchestrator | Monday 10 February 2025 09:17:51 +0000 (0:00:00.526) 0:00:55.810 ******* 2025-02-10 09:17:51.920287 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:51.921605 | orchestrator | 2025-02-10 09:17:51.924893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:52.127252 | orchestrator | Monday 10 February 2025 09:17:51 +0000 (0:00:00.220) 0:00:56.030 ******* 2025-02-10 09:17:52.127414 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:52.127963 | orchestrator | 2025-02-10 09:17:52.128851 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:52.130535 | orchestrator | Monday 10 February 2025 09:17:52 +0000 (0:00:00.205) 0:00:56.236 ******* 2025-02-10 09:17:52.641010 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:52.641699 | orchestrator | 2025-02-10 09:17:52.642913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:52.647008 | orchestrator | Monday 10 February 2025 09:17:52 +0000 (0:00:00.515) 0:00:56.751 ******* 2025-02-10 09:17:52.842318 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:52.846127 | orchestrator | 2025-02-10 09:17:53.061659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:53.061797 | orchestrator | Monday 10 February 2025 09:17:52 +0000 (0:00:00.200) 0:00:56.951 ******* 2025-02-10 09:17:53.061833 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:53.063797 | orchestrator | 2025-02-10 09:17:53.065025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:53.065054 | orchestrator | Monday 10 February 2025 09:17:53 +0000 (0:00:00.218) 0:00:57.170 ******* 2025-02-10 09:17:53.290985 | orchestrator | 
skipping: [testbed-node-5] 2025-02-10 09:17:53.291730 | orchestrator | 2025-02-10 09:17:53.294716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:53.497516 | orchestrator | Monday 10 February 2025 09:17:53 +0000 (0:00:00.230) 0:00:57.401 ******* 2025-02-10 09:17:53.497709 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:53.499236 | orchestrator | 2025-02-10 09:17:53.500463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:53.502633 | orchestrator | Monday 10 February 2025 09:17:53 +0000 (0:00:00.206) 0:00:57.608 ******* 2025-02-10 09:17:53.703063 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:53.704232 | orchestrator | 2025-02-10 09:17:53.706696 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:54.435001 | orchestrator | Monday 10 February 2025 09:17:53 +0000 (0:00:00.204) 0:00:57.813 ******* 2025-02-10 09:17:54.435205 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-02-10 09:17:54.436945 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-02-10 09:17:54.439360 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-02-10 09:17:54.439454 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-02-10 09:17:54.440107 | orchestrator | 2025-02-10 09:17:54.440790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:54.442201 | orchestrator | Monday 10 February 2025 09:17:54 +0000 (0:00:00.729) 0:00:58.543 ******* 2025-02-10 09:17:54.655425 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:54.655657 | orchestrator | 2025-02-10 09:17:54.656600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:54.656927 | orchestrator | Monday 10 February 2025 09:17:54 +0000 (0:00:00.222) 0:00:58.765 ******* 2025-02-10 09:17:54.869111 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:54.869901 | orchestrator | 2025-02-10 09:17:54.871295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:54.873560 | orchestrator | Monday 10 February 2025 09:17:54 +0000 (0:00:00.214) 0:00:58.980 ******* 2025-02-10 09:17:55.097881 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:55.098462 | orchestrator | 2025-02-10 09:17:55.099407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:17:55.100471 | orchestrator | Monday 10 February 2025 09:17:55 +0000 (0:00:00.225) 0:00:59.206 ******* 2025-02-10 09:17:55.606795 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:55.607197 | orchestrator | 2025-02-10 09:17:55.607249 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-10 09:17:55.741199 | orchestrator | Monday 10 February 2025 09:17:55 +0000 (0:00:00.511) 0:00:59.717 ******* 2025-02-10 09:17:55.741378 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:55.741853 | orchestrator | 2025-02-10 09:17:55.741912 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-10 09:17:55.743459 | orchestrator | Monday 10 February 2025 09:17:55 +0000 (0:00:00.135) 0:00:59.853 ******* 2025-02-10 09:17:55.945880 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'89c58721-f175-5d0e-8750-3436c1d71ced'}}) 2025-02-10 09:17:55.945993 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '989340a3-ac62-57b3-a342-92d58018bc1c'}}) 2025-02-10 09:17:55.947208 | orchestrator | 2025-02-10 09:17:55.948302 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-10 09:17:55.949331 | orchestrator | Monday 10 February 2025 09:17:55 +0000 (0:00:00.202) 0:01:00.055 ******* 2025-02-10 09:17:57.581278 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'}) 2025-02-10 09:17:57.582517 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'}) 2025-02-10 09:17:57.582597 | orchestrator | 2025-02-10 09:17:57.582725 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-10 09:17:57.583265 | orchestrator | Monday 10 February 2025 09:17:57 +0000 (0:00:01.633) 0:01:01.689 ******* 2025-02-10 09:17:57.744332 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:17:57.745244 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:17:57.746827 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:57.747461 | orchestrator | 2025-02-10 09:17:57.747865 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-10 09:17:57.748340 | orchestrator | Monday 10 February 2025 09:17:57 +0000 (0:00:00.165) 0:01:01.855 ******* 2025-02-10 09:17:58.951090 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'}) 2025-02-10 09:17:58.951657 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'}) 2025-02-10 09:17:58.951740 | orchestrator | 2025-02-10 09:17:58.951813 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-10 09:17:58.952655 | orchestrator | Monday 10 February 2025 09:17:58 +0000 (0:00:01.202) 0:01:03.058 ******* 2025-02-10 09:17:59.122852 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:17:59.123785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:17:59.124671 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:59.125735 | orchestrator | 2025-02-10 09:17:59.126510 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-10 09:17:59.127100 | orchestrator | Monday 10 February 2025 09:17:59 +0000 (0:00:00.176) 0:01:03.234 ******* 2025-02-10 09:17:59.313238 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:59.313970 | orchestrator | 2025-02-10 09:17:59.314057 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-02-10 09:17:59.315123 | orchestrator | Monday 10 February 2025 09:17:59 +0000 (0:00:00.189) 0:01:03.424 ******* 2025-02-10 09:17:59.499611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:17:59.499821 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:17:59.499858 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:59.499996 | orchestrator | 2025-02-10 09:17:59.501067 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-10 09:17:59.501574 | orchestrator | Monday 10 February 2025 09:17:59 +0000 (0:00:00.186) 0:01:03.610 ******* 2025-02-10 09:17:59.779677 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:59.779881 | orchestrator | 2025-02-10 09:17:59.779912 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-10 09:17:59.932941 | orchestrator | Monday 10 February 2025 09:17:59 +0000 (0:00:00.279) 0:01:03.890 ******* 2025-02-10 09:17:59.933091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:17:59.933610 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:17:59.935083 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:59.935186 | orchestrator | 2025-02-10 09:17:59.935209 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-10 09:17:59.935692 | orchestrator | Monday 10 February 2025 09:17:59 +0000 (0:00:00.154) 0:01:04.044 ******* 2025-02-10 09:18:00.068638 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:00.068869 | orchestrator | 2025-02-10 09:18:00.069186 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-10 09:18:00.069838 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:00.134) 0:01:04.179 ******* 2025-02-10 09:18:00.220555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:00.221364 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:00.222259 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:00.223262 | orchestrator | 2025-02-10 09:18:00.224296 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-10 09:18:00.224738 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:00.153) 0:01:04.332 ******* 2025-02-10 09:18:00.359340 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:00.360277 | orchestrator | 2025-02-10 09:18:00.360690 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-10 09:18:00.361542 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:00.138) 0:01:04.470 ******* 2025-02-10 09:18:00.518258 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:00.519369 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:00.521197 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:00.521882 | orchestrator | 2025-02-10 09:18:00.522640 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-10 09:18:00.523329 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:00.158) 0:01:04.629 ******* 2025-02-10 09:18:00.667203 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:00.669834 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:00.669969 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:00.670557 | orchestrator | 2025-02-10 09:18:00.671088 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-10 09:18:00.671441 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:00.148) 0:01:04.778 ******* 2025-02-10 09:18:00.824791 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:00.825565 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:00.825907 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:00.827099 | orchestrator | 2025-02-10 09:18:00.827529 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-10 09:18:00.828448 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:00.159) 0:01:04.937 ******* 2025-02-10 09:18:00.952202 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:00.953182 | orchestrator | 2025-02-10 09:18:00.954088 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-10 09:18:00.954137 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:00.125) 0:01:05.062 ******* 2025-02-10 09:18:01.086896 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:01.087426 | orchestrator | 2025-02-10 09:18:01.088454 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-10 09:18:01.089411 | orchestrator | Monday 10 February 2025 09:18:01 +0000 (0:00:00.136) 0:01:05.199 ******* 2025-02-10 09:18:01.218129 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:01.218848 | orchestrator | 2025-02-10 09:18:01.219031 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-10 09:18:01.221112 | orchestrator | Monday 10 February 2025 09:18:01 +0000 (0:00:00.130) 0:01:05.330 ******* 2025-02-10 09:18:01.356542 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:18:01.357238 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-10 09:18:01.359935 | orchestrator | } 2025-02-10 09:18:01.360761 | orchestrator | 2025-02-10 09:18:01.361205 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-10 09:18:01.361735 | orchestrator | Monday 10 February 2025 09:18:01 +0000 (0:00:00.137) 0:01:05.467 ******* 2025-02-10 09:18:01.645421 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:18:01.645668 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-10 09:18:01.646409 | orchestrator | } 2025-02-10 09:18:01.648903 | orchestrator | 2025-02-10 09:18:01.649329 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-10 09:18:01.649363 | orchestrator | Monday 10 February 2025 09:18:01 +0000 (0:00:00.289) 0:01:05.757 ******* 2025-02-10 09:18:01.799034 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:18:01.799413 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-10 09:18:01.799456 | orchestrator | } 2025-02-10 09:18:01.800624 | orchestrator | 2025-02-10 09:18:01.801114 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-10 09:18:01.801861 | orchestrator | Monday 10 February 2025 09:18:01 +0000 (0:00:00.151) 0:01:05.908 ******* 2025-02-10 09:18:02.257249 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:02.257623 | orchestrator | 2025-02-10 09:18:02.257999 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-10 09:18:02.258433 | orchestrator | Monday 10 February 2025 09:18:02 +0000 (0:00:00.460) 0:01:06.368 ******* 2025-02-10 09:18:02.738006 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:02.738932 | orchestrator | 2025-02-10 09:18:02.739455 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-02-10 09:18:02.740799 | orchestrator | Monday 10 February 2025 09:18:02 +0000 (0:00:00.480) 0:01:06.848 ******* 2025-02-10 09:18:03.196710 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:03.196873 | orchestrator | 2025-02-10 09:18:03.197719 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-10 09:18:03.198155 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.459) 0:01:07.308 ******* 2025-02-10 09:18:03.333799 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:03.455699 | orchestrator | 2025-02-10 09:18:03.455846 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-10 09:18:03.455861 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.136) 0:01:07.444 ******* 2025-02-10 09:18:03.455891 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:03.566819 | orchestrator | 2025-02-10 09:18:03.566962 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-10 09:18:03.566982 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.118) 0:01:07.563 ******* 2025-02-10 09:18:03.567041 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:03.568384 | orchestrator | 2025-02-10 09:18:03.568802 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-10 09:18:03.569851 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.114) 0:01:07.677 ******* 2025-02-10 09:18:03.710731 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:18:03.711315 | orchestrator |  "vgs_report": { 2025-02-10 09:18:03.711359 | orchestrator |  "vg": [] 2025-02-10 09:18:03.713027 | orchestrator |  } 2025-02-10 09:18:03.714076 | orchestrator 
| } 2025-02-10 09:18:03.714645 | orchestrator | 2025-02-10 09:18:03.714973 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-10 09:18:03.715605 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.143) 0:01:07.820 ******* 2025-02-10 09:18:03.834984 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:03.835710 | orchestrator | 2025-02-10 09:18:03.837046 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-10 09:18:03.837674 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.125) 0:01:07.945 ******* 2025-02-10 09:18:03.966132 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:03.966436 | orchestrator | 2025-02-10 09:18:03.967479 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-10 09:18:03.968249 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.131) 0:01:08.077 ******* 2025-02-10 09:18:04.242299 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:04.242559 | orchestrator | 2025-02-10 09:18:04.243615 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-10 09:18:04.243966 | orchestrator | Monday 10 February 2025 09:18:04 +0000 (0:00:00.276) 0:01:08.353 ******* 2025-02-10 09:18:04.368450 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:04.369853 | orchestrator | 2025-02-10 09:18:04.370984 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-10 09:18:04.372691 | orchestrator | Monday 10 February 2025 09:18:04 +0000 (0:00:00.126) 0:01:08.480 ******* 2025-02-10 09:18:04.487343 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:04.488183 | orchestrator | 2025-02-10 09:18:04.489269 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-10 09:18:04.489973 | orchestrator | Monday 10 February 2025 09:18:04 +0000 (0:00:00.118) 0:01:08.598 ******* 2025-02-10 09:18:04.621434 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:04.622594 | orchestrator | 2025-02-10 09:18:04.623756 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-10 09:18:04.624558 | orchestrator | Monday 10 February 2025 09:18:04 +0000 (0:00:00.133) 0:01:08.732 ******* 2025-02-10 09:18:04.761445 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:04.762187 | orchestrator | 2025-02-10 09:18:04.763007 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-10 09:18:04.763591 | orchestrator | Monday 10 February 2025 09:18:04 +0000 (0:00:00.132) 0:01:08.865 ******* 2025-02-10 09:18:04.892118 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:04.893623 | orchestrator | 2025-02-10 09:18:04.893677 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-10 09:18:04.894261 | orchestrator | Monday 10 February 2025 09:18:04 +0000 (0:00:00.137) 0:01:09.003 ******* 2025-02-10 09:18:05.015759 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:05.016012 | orchestrator | 2025-02-10 09:18:05.016818 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-10 09:18:05.017082 | orchestrator | Monday 10 February 2025 09:18:05 +0000 (0:00:00.124) 0:01:09.128 ******* 2025-02-10 09:18:05.135867 | orchestrator | 
skipping: [testbed-node-5] 2025-02-10 09:18:05.136464 | orchestrator | 2025-02-10 09:18:05.137543 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-10 09:18:05.138072 | orchestrator | Monday 10 February 2025 09:18:05 +0000 (0:00:00.120) 0:01:09.248 ******* 2025-02-10 09:18:05.290964 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:05.291773 | orchestrator | 2025-02-10 09:18:05.292095 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-10 09:18:05.294235 | orchestrator | Monday 10 February 2025 09:18:05 +0000 (0:00:00.154) 0:01:09.403 ******* 2025-02-10 09:18:05.426353 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:05.426638 | orchestrator | 2025-02-10 09:18:05.428168 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-02-10 09:18:05.428679 | orchestrator | Monday 10 February 2025 09:18:05 +0000 (0:00:00.135) 0:01:09.538 ******* 2025-02-10 09:18:05.558774 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:05.559719 | orchestrator | 2025-02-10 09:18:05.560440 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-10 09:18:05.560991 | orchestrator | Monday 10 February 2025 09:18:05 +0000 (0:00:00.131) 0:01:09.670 ******* 2025-02-10 09:18:05.693230 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:05.693910 | orchestrator | 2025-02-10 09:18:05.694864 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-10 09:18:05.695977 | orchestrator | Monday 10 February 2025 09:18:05 +0000 (0:00:00.134) 0:01:09.805 ******* 2025-02-10 09:18:06.011747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:06.012663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:06.013534 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:06.014102 | orchestrator | 2025-02-10 09:18:06.014699 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-10 09:18:06.015315 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:00.317) 0:01:10.122 ******* 2025-02-10 09:18:06.195897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:06.196157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:06.197160 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:06.199613 | orchestrator | 2025-02-10 09:18:06.200536 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-10 09:18:06.202090 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:00.185) 0:01:10.307 ******* 2025-02-10 09:18:06.367833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:06.367983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:06.367997 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:06.368523 | orchestrator | 2025-02-10 09:18:06.368796 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-10 09:18:06.369193 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:00.171) 0:01:10.479 ******* 2025-02-10 09:18:06.532050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:06.534119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:06.534867 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:06.534987 | orchestrator | 2025-02-10 09:18:06.535674 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-10 09:18:06.535750 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:00.163) 0:01:10.642 ******* 2025-02-10 09:18:06.704217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:06.704389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:06.704410 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:06.704910 | orchestrator | 2025-02-10 09:18:06.705712 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-10 09:18:06.706301 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:00.172) 0:01:10.815 ******* 2025-02-10 09:18:06.881640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:06.883095 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:06.883681 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:06.884539 | orchestrator | 2025-02-10 09:18:06.884896 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-10 09:18:06.885801 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:00.176) 0:01:10.992 ******* 2025-02-10 09:18:07.049678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:07.049895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:07.050606 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:07.051195 | orchestrator | 2025-02-10 09:18:07.052037 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-10 09:18:07.052749 | orchestrator | Monday 10 February 2025 09:18:07 +0000 (0:00:00.168) 0:01:11.161 ******* 2025-02-10 09:18:07.195347 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:07.195815 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:07.196908 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:07.197872 | orchestrator | 2025-02-10 09:18:07.198135 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-10 09:18:07.198905 | orchestrator | Monday 10 February 2025 09:18:07 +0000 (0:00:00.145) 0:01:11.307 ******* 2025-02-10 09:18:07.666210 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:07.666399 | orchestrator | 2025-02-10 09:18:07.666878 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-02-10 09:18:07.667895 | orchestrator | Monday 10 February 2025 09:18:07 +0000 (0:00:00.469) 0:01:11.776 ******* 2025-02-10 09:18:08.153883 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:08.154754 | orchestrator | 2025-02-10 09:18:08.155610 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-10 09:18:08.156101 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:00.485) 0:01:12.262 ******* 2025-02-10 09:18:08.296926 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:08.297540 | orchestrator | 2025-02-10 09:18:08.297937 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-10 09:18:08.298443 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:00.144) 0:01:12.407 ******* 2025-02-10 09:18:08.601070 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'vg_name': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'}) 2025-02-10 09:18:08.601704 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'vg_name': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'}) 2025-02-10 09:18:08.601753 | orchestrator | 2025-02-10 09:18:08.602130 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-10 09:18:08.602633 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:00.302) 0:01:12.710 ******* 2025-02-10 09:18:08.754618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:08.755585 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:08.756862 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:08.757312 | orchestrator | 2025-02-10 09:18:08.758102 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-10 09:18:08.758766 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:00.155) 0:01:12.865 ******* 2025-02-10 09:18:08.914728 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:08.915415 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  
2025-02-10 09:18:08.916088 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:08.916818 | orchestrator | 2025-02-10 09:18:08.917411 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-10 09:18:08.917899 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:00.160) 0:01:13.025 ******* 2025-02-10 09:18:09.087866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'})  2025-02-10 09:18:09.088521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'})  2025-02-10 09:18:09.088634 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:09.090790 | orchestrator | 2025-02-10 09:18:09.489008 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-10 09:18:09.489111 | orchestrator | Monday 10 February 2025 09:18:09 +0000 (0:00:00.173) 0:01:13.199 ******* 2025-02-10 09:18:09.489132 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:18:09.489615 | orchestrator |  "lvm_report": { 2025-02-10 09:18:09.491370 | orchestrator |  "lv": [ 2025-02-10 09:18:09.492341 | orchestrator |  { 2025-02-10 09:18:09.493786 | orchestrator |  "lv_name": "osd-block-89c58721-f175-5d0e-8750-3436c1d71ced", 2025-02-10 09:18:09.494579 | orchestrator |  "vg_name": "ceph-89c58721-f175-5d0e-8750-3436c1d71ced" 2025-02-10 09:18:09.495421 | orchestrator |  }, 2025-02-10 09:18:09.496921 | orchestrator |  { 2025-02-10 09:18:09.497734 | orchestrator |  "lv_name": "osd-block-989340a3-ac62-57b3-a342-92d58018bc1c", 2025-02-10 09:18:09.497786 | orchestrator |  "vg_name": "ceph-989340a3-ac62-57b3-a342-92d58018bc1c" 2025-02-10 09:18:09.498140 | orchestrator |  } 2025-02-10 09:18:09.498847 | orchestrator |  ], 2025-02-10 09:18:09.499168 | orchestrator |  "pv": [ 2025-02-10 09:18:09.499823 | orchestrator |  { 2025-02-10 09:18:09.500354 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-10 09:18:09.500964 | orchestrator |  "vg_name": "ceph-89c58721-f175-5d0e-8750-3436c1d71ced" 2025-02-10 09:18:09.501575 | orchestrator |  }, 2025-02-10 09:18:09.502215 | orchestrator |  { 2025-02-10 09:18:09.502922 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-10 09:18:09.503861 | orchestrator |  "vg_name": "ceph-989340a3-ac62-57b3-a342-92d58018bc1c" 2025-02-10 09:18:09.504281 | orchestrator |  } 2025-02-10 09:18:09.504965 | orchestrator |  ] 2025-02-10 09:18:09.505721 | orchestrator |  } 2025-02-10 09:18:09.506267 | orchestrator | } 2025-02-10 09:18:09.506941 | orchestrator | 2025-02-10 09:18:09.507680 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:18:09.508299 | orchestrator | 2025-02-10 09:18:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:18:09.508593 | orchestrator | 2025-02-10 09:18:09 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:18:09.509205 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-10 09:18:09.510061 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-10 09:18:09.511225 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-10 09:18:09.511760 | orchestrator | 2025-02-10 09:18:09.512748 | orchestrator | 2025-02-10 09:18:09.513821 | orchestrator | 2025-02-10 09:18:09.515132 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:18:09.515416 | orchestrator | Monday 10 February 2025 09:18:09 +0000 (0:00:00.400) 0:01:13.600 ******* 2025-02-10 09:18:09.516862 | orchestrator | =============================================================================== 2025-02-10 09:18:09.517340 | orchestrator | Create block VGs -------------------------------------------------------- 5.20s 2025-02-10 09:18:09.518600 | orchestrator | Create block LVs -------------------------------------------------------- 3.62s 2025-02-10 09:18:09.518823 | orchestrator | Print LVM report data --------------------------------------------------- 1.96s 2025-02-10 09:18:09.520139 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.75s 2025-02-10 09:18:09.520376 | orchestrator | Add known links to the list of available block devices ------------------ 1.69s 2025-02-10 09:18:09.520790 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2025-02-10 09:18:09.521201 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.50s 2025-02-10 09:18:09.521816 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s 2025-02-10 09:18:09.522006 | orchestrator | Add known partitions to the list of available block devices ------------- 1.46s 2025-02-10 09:18:09.523355 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.43s 2025-02-10 09:18:09.524456 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.15s 2025-02-10 09:18:09.525205 | orchestrator | Get initial list of available block devices ----------------------------- 0.98s 2025-02-10 09:18:09.526209 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.89s 2025-02-10 09:18:09.527550 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2025-02-10 09:18:09.528805 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2025-02-10 09:18:09.529541 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2025-02-10 09:18:09.530141 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-02-10 09:18:09.530964 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-02-10 09:18:09.533572 | orchestrator | Print size needed for LVs on ceph_wal_devices --------------------------- 0.69s 2025-02-10 09:18:09.533712 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-02-10 09:18:11.449472 | orchestrator | 2025-02-10 09:18:11 | INFO  | Task 42689bc7-57dc-466c-8368-b2e6e7e2991f (facts) was prepared for execution. 
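[Editor's note] The play above provisions the Ceph OSD backing storage: it creates one LVM volume group per OSD device ("Create block VGs"), one block logical volume per VG ("Create block LVs"), and then verifies the result by querying LVM in JSON form ("Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output"). The sketch below is a minimal, illustrative reconstruction of those steps, not the actual OSISM playbook: the module choices (community.general.lvg/lvol), the simplified ceph_osd_devices variable shape, and the play layout are assumptions; only the device names and UUIDs are taken from the log output for testbed-node-5.

# Illustrative sketch only -- assumptions: community.general.lvg/lvol modules,
# a simplified ceph_osd_devices mapping; the real OSISM tasks may differ.
- name: Create block VGs and LVs for Ceph OSDs (sketch)
  hosts: testbed-node-5
  become: true
  vars:
    # Example data mirroring the log: device -> OSD LVM UUID
    ceph_osd_devices:
      sdb: { osd_lvm_uuid: 89c58721-f175-5d0e-8750-3436c1d71ced }
      sdc: { osd_lvm_uuid: 989340a3-ac62-57b3-a342-92d58018bc1c }
  tasks:
    - name: Create one block VG per OSD device
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
        state: present
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create one block LV per VG, consuming all free space
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%FREE
        state: present
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Get list of Ceph LVs with associated VGs (JSON report)
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs (JSON report)
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report: >-
          {{ (_lvs_cmd_output.stdout | from_json).report[0]
             | combine((_pvs_cmd_output.stdout | from_json).report[0]) }}

The combined fact has the same shape as the "Print LVM report data" output above ({"lv": [...], "pv": [...]}). In this testbed run no separate ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices are configured, which is why all of the DB/WAL sizing checks (e.g. "Fail if DB LV size < 30 GiB ...") and DB/WAL LV creation tasks report "skipping" and only the block VGs/LVs are created.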
2025-02-10 09:18:11.449810 | orchestrator | 2025-02-10 09:18:11 | INFO  | It takes a moment until task 42689bc7-57dc-466c-8368-b2e6e7e2991f (facts) has been started and output is visible here. 2025-02-10 09:18:14.768313 | orchestrator | 2025-02-10 09:18:14.768485 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-10 09:18:14.771625 | orchestrator | 2025-02-10 09:18:14.771691 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-10 09:18:14.771710 | orchestrator | Monday 10 February 2025 09:18:14 +0000 (0:00:00.204) 0:00:00.204 ******* 2025-02-10 09:18:15.755491 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:15.755731 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:18:15.756390 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:18:15.757316 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:18:15.757917 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:18:15.758462 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:18:15.761011 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:15.761391 | orchestrator | 2025-02-10 09:18:15.761886 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-10 09:18:15.762257 | orchestrator | Monday 10 February 2025 09:18:15 +0000 (0:00:00.987) 0:00:01.191 ******* 2025-02-10 09:18:15.903245 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:18:15.981228 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:18:16.057630 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:18:16.155227 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:18:16.242696 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:18:16.910913 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:18:16.911664 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:16.911740 | orchestrator | 2025-02-10 09:18:16.912424 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:18:16.914097 | orchestrator | 2025-02-10 09:18:16.914477 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:18:16.917686 | orchestrator | Monday 10 February 2025 09:18:16 +0000 (0:00:01.154) 0:00:02.346 ******* 2025-02-10 09:18:21.199246 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:18:21.199978 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:18:21.200612 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:18:21.200640 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:21.200658 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:18:21.201432 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:21.201860 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:18:21.202669 | orchestrator | 2025-02-10 09:18:21.203071 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-10 09:18:21.203478 | orchestrator | 2025-02-10 09:18:21.203929 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-10 09:18:21.204654 | orchestrator | Monday 10 February 2025 09:18:21 +0000 (0:00:04.290) 0:00:06.637 ******* 2025-02-10 09:18:21.474139 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:18:21.545827 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:18:21.614553 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:18:21.686117 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:18:21.769179 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:18:21.818249 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:18:21.818954 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:18:21.819950 | orchestrator | 2025-02-10 09:18:21.821093 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:18:21.821244 | orchestrator | 2025-02-10 09:18:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:18:21.821312 | orchestrator | 2025-02-10 09:18:21 | INFO  | Please wait and do not abort execution. 2025-02-10 09:18:21.822169 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:18:21.822852 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:18:21.823337 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:18:21.823659 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:18:21.824109 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:18:21.824817 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:18:21.825535 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:18:21.826225 | orchestrator | 2025-02-10 09:18:21.826680 | orchestrator | Monday 10 February 2025 09:18:21 +0000 (0:00:00.619) 0:00:07.257 ******* 2025-02-10 09:18:21.827016 | orchestrator | =============================================================================== 2025-02-10 09:18:21.827653 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.29s 2025-02-10 09:18:21.828025 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.15s 2025-02-10 09:18:21.828544 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.99s 2025-02-10 09:18:21.828866 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2025-02-10 09:18:22.253312 | orchestrator | 2025-02-10 09:18:22.256529 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Feb 10 09:18:22 UTC 2025 2025-02-10 09:18:23.835610 | orchestrator | 2025-02-10 09:18:23.835771 | orchestrator | 2025-02-10 09:18:23 | INFO  | Collection nutshell is prepared for execution 2025-02-10 09:18:23.839486 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [0] - dotfiles 2025-02-10 09:18:23.839574 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [0] - homer 2025-02-10 09:18:23.840772 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [0] - netdata 2025-02-10 09:18:23.840871 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [0] - openstackclient 2025-02-10 09:18:23.840888 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [0] - phpmyadmin 2025-02-10 09:18:23.840902 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [0] - common 2025-02-10 09:18:23.840922 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [1] -- loadbalancer 2025-02-10 09:18:23.841013 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [2] --- opensearch 2025-02-10 09:18:23.841557 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [2] --- mariadb-ng 2025-02-10 09:18:23.841584 | orchestrator | 2025-02-10 
09:18:23 | INFO  | D [3] ---- horizon 2025-02-10 09:18:23.841603 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [3] ---- keystone 2025-02-10 09:18:23.841891 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [4] ----- neutron 2025-02-10 09:18:23.841919 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [5] ------ wait-for-nova 2025-02-10 09:18:23.841935 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [5] ------ octavia 2025-02-10 09:18:23.841984 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [4] ----- barbican 2025-02-10 09:18:23.842094 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [4] ----- designate 2025-02-10 09:18:23.842120 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [4] ----- ironic 2025-02-10 09:18:23.842444 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [4] ----- placement 2025-02-10 09:18:23.842498 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [4] ----- magnum 2025-02-10 09:18:23.842566 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [1] -- openvswitch 2025-02-10 09:18:23.842645 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [2] --- ovn 2025-02-10 09:18:23.842917 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [1] -- memcached 2025-02-10 09:18:23.843073 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [1] -- redis 2025-02-10 09:18:23.843111 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [1] -- rabbitmq-ng 2025-02-10 09:18:23.843423 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [0] - kubernetes 2025-02-10 09:18:23.843461 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [1] -- kubeconfig 2025-02-10 09:18:23.843633 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [1] -- copy-kubeconfig 2025-02-10 09:18:23.843663 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [0] - ceph 2025-02-10 09:18:23.845253 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [1] -- ceph-pools 2025-02-10 09:18:23.845637 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [2] --- copy-ceph-keys 2025-02-10 09:18:23.845664 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [3] ---- cephclient 2025-02-10 09:18:23.845679 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-02-10 09:18:23.845693 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [4] ----- wait-for-keystone 2025-02-10 09:18:23.845741 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [5] ------ kolla-ceph-rgw 2025-02-10 09:18:23.845755 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [5] ------ glance 2025-02-10 09:18:23.845769 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [5] ------ cinder 2025-02-10 09:18:23.845783 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [5] ------ nova 2025-02-10 09:18:23.845802 | orchestrator | 2025-02-10 09:18:23 | INFO  | A [4] ----- prometheus 2025-02-10 09:18:23.988105 | orchestrator | 2025-02-10 09:18:23 | INFO  | D [5] ------ grafana 2025-02-10 09:18:23.988230 | orchestrator | 2025-02-10 09:18:23 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-02-10 09:18:25.909680 | orchestrator | 2025-02-10 09:18:23 | INFO  | Tasks are running in the background 2025-02-10 09:18:25.909844 | orchestrator | 2025-02-10 09:18:25 | INFO  | No task IDs specified, wait for all currently running tasks 2025-02-10 09:18:28.033807 | orchestrator | 2025-02-10 09:18:28 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:28.034276 | orchestrator | 2025-02-10 09:18:28 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:28.034318 | orchestrator | 2025-02-10 09:18:28 | INFO  | Task 
c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:28.034334 | orchestrator | 2025-02-10 09:18:28 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:28.034350 | orchestrator | 2025-02-10 09:18:28 | INFO  | Task 8b59ac64-1f2b-46a9-9137-c1b84d6b522b is in state STARTED 2025-02-10 09:18:28.034375 | orchestrator | 2025-02-10 09:18:28 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:31.079264 | orchestrator | 2025-02-10 09:18:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:31.080131 | orchestrator | 2025-02-10 09:18:31 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:31.083057 | orchestrator | 2025-02-10 09:18:31 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:31.086125 | orchestrator | 2025-02-10 09:18:31 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:31.086596 | orchestrator | 2025-02-10 09:18:31 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:31.087382 | orchestrator | 2025-02-10 09:18:31 | INFO  | Task 8b59ac64-1f2b-46a9-9137-c1b84d6b522b is in state STARTED 2025-02-10 09:18:31.088239 | orchestrator | 2025-02-10 09:18:31 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:31.088323 | orchestrator | 2025-02-10 09:18:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:34.137120 | orchestrator | 2025-02-10 09:18:34 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:34.139937 | orchestrator | 2025-02-10 09:18:34 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:34.139993 | orchestrator | 2025-02-10 09:18:34 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:34.140705 | orchestrator | 2025-02-10 09:18:34 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:34.141099 | orchestrator | 2025-02-10 09:18:34 | INFO  | Task 8b59ac64-1f2b-46a9-9137-c1b84d6b522b is in state STARTED 2025-02-10 09:18:34.141133 | orchestrator | 2025-02-10 09:18:34 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:37.184386 | orchestrator | 2025-02-10 09:18:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:37.184647 | orchestrator | 2025-02-10 09:18:37 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:37.186451 | orchestrator | 2025-02-10 09:18:37 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:37.186496 | orchestrator | 2025-02-10 09:18:37 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:37.186781 | orchestrator | 2025-02-10 09:18:37 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:37.186816 | orchestrator | 2025-02-10 09:18:37 | INFO  | Task 8b59ac64-1f2b-46a9-9137-c1b84d6b522b is in state STARTED 2025-02-10 09:18:37.188018 | orchestrator | 2025-02-10 09:18:37 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:37.188094 | orchestrator | 2025-02-10 09:18:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:40.237240 | orchestrator | 2025-02-10 09:18:40 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:40.238825 | orchestrator | 2025-02-10 
09:18:40 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:40.238945 | orchestrator | 2025-02-10 09:18:40 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:40.239027 | orchestrator | 2025-02-10 09:18:40 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:40.240147 | orchestrator | 2025-02-10 09:18:40 | INFO  | Task 8b59ac64-1f2b-46a9-9137-c1b84d6b522b is in state STARTED 2025-02-10 09:18:40.240958 | orchestrator | 2025-02-10 09:18:40 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:43.311193 | orchestrator | 2025-02-10 09:18:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:43.311378 | orchestrator | 2025-02-10 09:18:43 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:43.312690 | orchestrator | 2025-02-10 09:18:43 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:43.313641 | orchestrator | 2025-02-10 09:18:43 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:43.315881 | orchestrator | 2025-02-10 09:18:43 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:43.315922 | orchestrator | 2025-02-10 09:18:43 | INFO  | Task 8b59ac64-1f2b-46a9-9137-c1b84d6b522b is in state STARTED 2025-02-10 09:18:43.316381 | orchestrator | 2025-02-10 09:18:43 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:46.390292 | orchestrator | 2025-02-10 09:18:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:46.390457 | orchestrator | 2025-02-10 09:18:46 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:46.394427 | orchestrator | 2025-02-10 09:18:46 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:46.394562 | orchestrator | 2025-02-10 09:18:46 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:46.395110 | orchestrator | 2025-02-10 09:18:46 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:46.397180 | orchestrator | 2025-02-10 09:18:46.397224 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-02-10 09:18:46.397237 | orchestrator | 2025-02-10 09:18:46.397249 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-02-10 09:18:46.397260 | orchestrator | Monday 10 February 2025 09:18:31 +0000 (0:00:00.396) 0:00:00.396 ******* 2025-02-10 09:18:46.397294 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:46.397307 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:18:46.397318 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:18:46.397330 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:18:46.397340 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:18:46.397351 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:18:46.397362 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:18:46.397372 | orchestrator | 2025-02-10 09:18:46.397383 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-02-10 09:18:46.397401 | orchestrator | Monday 10 February 2025 09:18:34 +0000 (0:00:03.631) 0:00:04.028 ******* 2025-02-10 09:18:46.397413 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-02-10 09:18:46.397431 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-02-10 09:18:46.397442 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-02-10 09:18:46.397452 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-02-10 09:18:46.397463 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-02-10 09:18:46.397474 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-02-10 09:18:46.397484 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-02-10 09:18:46.397495 | orchestrator | 2025-02-10 09:18:46.397505 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-02-10 09:18:46.397571 | orchestrator | Monday 10 February 2025 09:18:36 +0000 (0:00:02.066) 0:00:06.095 ******* 2025-02-10 09:18:46.397587 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:18:35.542567', 'end': '2025-02-10 09:18:35.549182', 'delta': '0:00:00.006615', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:18:46.397606 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:18:35.589919', 'end': '2025-02-10 09:18:35.597613', 'delta': '0:00:00.007694', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:18:46.397617 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:18:35.568573', 'end': '2025-02-10 09:18:35.573738', 'delta': '0:00:00.005165', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:18:46.397653 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:18:35.920105', 'end': '2025-02-10 09:18:35.927921', 'delta': '0:00:00.007816', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:18:46.397665 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:18:36.164717', 'end': '2025-02-10 09:18:36.172040', 'delta': '0:00:00.007323', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:18:46.397675 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:18:36.474120', 'end': '2025-02-10 09:18:36.481541', 'delta': '0:00:00.007421', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:18:46.397690 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:18:36.741859', 'end': '2025-02-10 09:18:36.750011', 'delta': '0:00:00.008152', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:18:46.397701 | orchestrator | 2025-02-10 09:18:46.397711 
| orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-02-10 09:18:46.397721 | orchestrator | Monday 10 February 2025 09:18:39 +0000 (0:00:02.652) 0:00:08.748 ******* 2025-02-10 09:18:46.397731 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-02-10 09:18:46.397742 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-02-10 09:18:46.397752 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-02-10 09:18:46.397762 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-02-10 09:18:46.397771 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-02-10 09:18:46.397788 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-02-10 09:18:46.397799 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-02-10 09:18:46.397811 | orchestrator | 2025-02-10 09:18:46.397822 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:18:46.397834 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:46.397848 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:46.397860 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:46.397876 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:46.397905 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:46.397917 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:46.397929 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:46.397940 | orchestrator | 2025-02-10 09:18:46.397951 | orchestrator | Monday 10 February 2025 09:18:43 +0000 (0:00:03.533) 0:00:12.282 ******* 2025-02-10 09:18:46.397963 | orchestrator | =============================================================================== 2025-02-10 09:18:46.397974 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.63s 2025-02-10 09:18:46.397986 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.53s 2025-02-10 09:18:46.397997 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.65s 2025-02-10 09:18:46.398009 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 2.07s 2025-02-10 09:18:46.398066 | orchestrator | 2025-02-10 09:18:46 | INFO  | Task 8b59ac64-1f2b-46a9-9137-c1b84d6b522b is in state SUCCESS 2025-02-10 09:18:46.398135 | orchestrator | 2025-02-10 09:18:46 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:18:46.399240 | orchestrator | 2025-02-10 09:18:46 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:49.472716 | orchestrator | 2025-02-10 09:18:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:49.472885 | orchestrator | 2025-02-10 09:18:49 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:49.476757 | orchestrator | 2025-02-10 09:18:49 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:49.477734 | orchestrator | 2025-02-10 09:18:49 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:49.477769 | orchestrator | 2025-02-10 09:18:49 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:49.477790 | orchestrator | 2025-02-10 09:18:49 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:18:49.478839 | orchestrator | 2025-02-10 09:18:49 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:52.594721 | orchestrator | 2025-02-10 09:18:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:52.594835 | orchestrator | 2025-02-10 09:18:52 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:52.597744 | orchestrator | 2025-02-10 09:18:52 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:52.598577 | orchestrator | 2025-02-10 09:18:52 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:52.599350 | orchestrator | 2025-02-10 09:18:52 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:52.599810 | orchestrator | 2025-02-10 09:18:52 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:18:52.601324 | orchestrator | 2025-02-10 09:18:52 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:52.601649 | orchestrator | 2025-02-10 09:18:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:55.684374 | orchestrator | 2025-02-10 09:18:55 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:55.690399 | orchestrator | 2025-02-10 09:18:55 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:55.692666 | orchestrator | 2025-02-10 09:18:55 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:55.699081 | orchestrator | 2025-02-10 09:18:55 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:55.701446 | orchestrator | 2025-02-10 09:18:55 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:18:55.707237 | orchestrator | 2025-02-10 09:18:55 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:58.813980 | orchestrator | 2025-02-10 09:18:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:58.814187 | orchestrator | 2025-02-10 09:18:58 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:18:58.817908 | orchestrator | 2025-02-10 09:18:58 | INFO  | Task 
c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:18:58.823819 | orchestrator | 2025-02-10 09:18:58 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:18:58.825432 | orchestrator | 2025-02-10 09:18:58 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:18:58.827410 | orchestrator | 2025-02-10 09:18:58 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:18:58.829211 | orchestrator | 2025-02-10 09:18:58 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:18:58.830560 | orchestrator | 2025-02-10 09:18:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:01.944083 | orchestrator | 2025-02-10 09:19:01 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:05.003418 | orchestrator | 2025-02-10 09:19:01 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:19:05.003657 | orchestrator | 2025-02-10 09:19:01 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:05.003697 | orchestrator | 2025-02-10 09:19:01 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:05.003722 | orchestrator | 2025-02-10 09:19:01 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:05.003747 | orchestrator | 2025-02-10 09:19:01 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:05.003771 | orchestrator | 2025-02-10 09:19:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:05.003822 | orchestrator | 2025-02-10 09:19:05 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:05.005483 | orchestrator | 2025-02-10 09:19:05 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:19:05.008069 | orchestrator | 2025-02-10 09:19:05 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:05.008139 | orchestrator | 2025-02-10 09:19:05 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:05.009940 | orchestrator | 2025-02-10 09:19:05 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:05.011730 | orchestrator | 2025-02-10 09:19:05 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:08.095171 | orchestrator | 2025-02-10 09:19:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:08.095338 | orchestrator | 2025-02-10 09:19:08 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:08.106125 | orchestrator | 2025-02-10 09:19:08 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:19:08.106200 | orchestrator | 2025-02-10 09:19:08 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:08.106231 | orchestrator | 2025-02-10 09:19:08 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:11.162521 | orchestrator | 2025-02-10 09:19:08 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:11.162711 | orchestrator | 2025-02-10 09:19:08 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:11.162736 | orchestrator | 2025-02-10 09:19:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:11.162772 | orchestrator | 2025-02-10 
09:19:11 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:11.167416 | orchestrator | 2025-02-10 09:19:11 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state STARTED 2025-02-10 09:19:11.173394 | orchestrator | 2025-02-10 09:19:11 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:11.173468 | orchestrator | 2025-02-10 09:19:11 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:11.173701 | orchestrator | 2025-02-10 09:19:11 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:11.177199 | orchestrator | 2025-02-10 09:19:11 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:14.263632 | orchestrator | 2025-02-10 09:19:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:14.263759 | orchestrator | 2025-02-10 09:19:14 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:14.264565 | orchestrator | 2025-02-10 09:19:14 | INFO  | Task c42d286d-894d-4c21-a04e-c3eb9c6fb3d3 is in state SUCCESS 2025-02-10 09:19:14.264605 | orchestrator | 2025-02-10 09:19:14 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:14.265636 | orchestrator | 2025-02-10 09:19:14 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:14.266873 | orchestrator | 2025-02-10 09:19:14 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:14.269444 | orchestrator | 2025-02-10 09:19:14 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:14.270523 | orchestrator | 2025-02-10 09:19:14 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:17.331447 | orchestrator | 2025-02-10 09:19:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:17.331643 | orchestrator | 2025-02-10 09:19:17 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:17.332792 | orchestrator | 2025-02-10 09:19:17 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:17.332840 | orchestrator | 2025-02-10 09:19:17 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:17.334500 | orchestrator | 2025-02-10 09:19:17 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:17.335140 | orchestrator | 2025-02-10 09:19:17 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:17.336940 | orchestrator | 2025-02-10 09:19:17 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:20.402178 | orchestrator | 2025-02-10 09:19:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:20.402380 | orchestrator | 2025-02-10 09:19:20 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:20.407742 | orchestrator | 2025-02-10 09:19:20 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:20.409040 | orchestrator | 2025-02-10 09:19:20 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:20.409079 | orchestrator | 2025-02-10 09:19:20 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:20.409108 | orchestrator | 2025-02-10 09:19:20 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 
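
Editorial note: the repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines come from the OSISM manager waiting for the background Ansible tasks of the nutshell collection; each task flips to SUCCESS as its play finishes (dotfiles first, above). Conceptually this is a simple poll loop; a minimal sketch of the idea follows, where get_task_state() is a hypothetical stand-in for the real task-state lookup, which is not shown in this log.

    # Illustrative only: poll a set of background task IDs until none remains STARTED.
    # get_task_state() is a hypothetical callable returning e.g. "STARTED" or "SUCCESS".
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
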
2025-02-10 09:19:20.409449 | orchestrator | 2025-02-10 09:19:20 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:20.412474 | orchestrator | 2025-02-10 09:19:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:23.491482 | orchestrator | 2025-02-10 09:19:23 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:23.493249 | orchestrator | 2025-02-10 09:19:23 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:23.495658 | orchestrator | 2025-02-10 09:19:23 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:23.497248 | orchestrator | 2025-02-10 09:19:23 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:23.499086 | orchestrator | 2025-02-10 09:19:23 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:23.500202 | orchestrator | 2025-02-10 09:19:23 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:26.569737 | orchestrator | 2025-02-10 09:19:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:26.569883 | orchestrator | 2025-02-10 09:19:26 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:26.574584 | orchestrator | 2025-02-10 09:19:26 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:26.574683 | orchestrator | 2025-02-10 09:19:26 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:26.577648 | orchestrator | 2025-02-10 09:19:26 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:29.676538 | orchestrator | 2025-02-10 09:19:26 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:29.676694 | orchestrator | 2025-02-10 09:19:26 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:29.676705 | orchestrator | 2025-02-10 09:19:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:29.676726 | orchestrator | 2025-02-10 09:19:29 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:29.681880 | orchestrator | 2025-02-10 09:19:29 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:29.682205 | orchestrator | 2025-02-10 09:19:29 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:29.683778 | orchestrator | 2025-02-10 09:19:29 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:29.688591 | orchestrator | 2025-02-10 09:19:29 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:32.825936 | orchestrator | 2025-02-10 09:19:29 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:32.826120 | orchestrator | 2025-02-10 09:19:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:32.826157 | orchestrator | 2025-02-10 09:19:32 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:32.828093 | orchestrator | 2025-02-10 09:19:32 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:32.829889 | orchestrator | 2025-02-10 09:19:32 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:32.831938 | orchestrator | 2025-02-10 09:19:32 | INFO  | Task 
795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:32.836179 | orchestrator | 2025-02-10 09:19:32 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:32.837543 | orchestrator | 2025-02-10 09:19:32 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:35.900041 | orchestrator | 2025-02-10 09:19:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:35.900215 | orchestrator | 2025-02-10 09:19:35 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:35.903005 | orchestrator | 2025-02-10 09:19:35 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:35.903188 | orchestrator | 2025-02-10 09:19:35 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:35.903220 | orchestrator | 2025-02-10 09:19:35 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:35.909366 | orchestrator | 2025-02-10 09:19:35 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:35.911979 | orchestrator | 2025-02-10 09:19:35 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:38.968302 | orchestrator | 2025-02-10 09:19:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:38.968424 | orchestrator | 2025-02-10 09:19:38 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state STARTED 2025-02-10 09:19:38.968468 | orchestrator | 2025-02-10 09:19:38 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:38.969157 | orchestrator | 2025-02-10 09:19:38 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:38.973471 | orchestrator | 2025-02-10 09:19:38 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:38.974203 | orchestrator | 2025-02-10 09:19:38 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:38.975463 | orchestrator | 2025-02-10 09:19:38 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:38.975587 | orchestrator | 2025-02-10 09:19:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:42.041696 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task ef2a83ec-1ad8-42ec-bda2-1521c5088a5d is in state SUCCESS 2025-02-10 09:19:42.041971 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:42.043598 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:42.046701 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:42.048541 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:42.051516 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:42.053684 | orchestrator | 2025-02-10 09:19:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:45.096071 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state STARTED 2025-02-10 09:19:45.096321 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:45.096342 | orchestrator | 2025-02-10 
09:19:45 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:45.096357 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:45.097358 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:48.156782 | orchestrator | 2025-02-10 09:19:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:48.156997 | orchestrator | 2025-02-10 09:19:48.157025 | orchestrator | 2025-02-10 09:19:48.157040 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-02-10 09:19:48.157055 | orchestrator | 2025-02-10 09:19:48.157069 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-02-10 09:19:48.157084 | orchestrator | Monday 10 February 2025 09:18:33 +0000 (0:00:00.181) 0:00:00.181 ******* 2025-02-10 09:19:48.157100 | orchestrator | ok: [testbed-manager] => { 2025-02-10 09:19:48.157125 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-02-10 09:19:48.157150 | orchestrator | } 2025-02-10 09:19:48.157174 | orchestrator | 2025-02-10 09:19:48.157197 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-02-10 09:19:48.157219 | orchestrator | Monday 10 February 2025 09:18:33 +0000 (0:00:00.173) 0:00:00.354 ******* 2025-02-10 09:19:48.157244 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.157270 | orchestrator | 2025-02-10 09:19:48.157294 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-02-10 09:19:48.157318 | orchestrator | Monday 10 February 2025 09:18:34 +0000 (0:00:01.034) 0:00:01.389 ******* 2025-02-10 09:19:48.157342 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-02-10 09:19:48.157367 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-02-10 09:19:48.157392 | orchestrator | 2025-02-10 09:19:48.157417 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-02-10 09:19:48.157434 | orchestrator | Monday 10 February 2025 09:18:35 +0000 (0:00:00.916) 0:00:02.306 ******* 2025-02-10 09:19:48.157451 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.157468 | orchestrator | 2025-02-10 09:19:48.157483 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-02-10 09:19:48.157499 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:03.439) 0:00:05.746 ******* 2025-02-10 09:19:48.157515 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.157531 | orchestrator | 2025-02-10 09:19:48.157547 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-02-10 09:19:48.157583 | orchestrator | Monday 10 February 2025 09:18:40 +0000 (0:00:01.572) 0:00:07.318 ******* 2025-02-10 09:19:48.157625 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-02-10 09:19:48.157639 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.157653 | orchestrator | 2025-02-10 09:19:48.157667 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-02-10 09:19:48.157681 | orchestrator | Monday 10 February 2025 09:19:08 +0000 (0:00:27.739) 0:00:35.058 ******* 2025-02-10 09:19:48.157694 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.157708 | orchestrator | 2025-02-10 09:19:48.157721 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:19:48.157735 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.157751 | orchestrator | 2025-02-10 09:19:48.157765 | orchestrator | Monday 10 February 2025 09:19:12 +0000 (0:00:03.919) 0:00:38.977 ******* 2025-02-10 09:19:48.157778 | orchestrator | =============================================================================== 2025-02-10 09:19:48.157792 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.74s 2025-02-10 09:19:48.157806 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.92s 2025-02-10 09:19:48.157820 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.44s 2025-02-10 09:19:48.157843 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.57s 2025-02-10 09:19:48.157857 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.03s 2025-02-10 09:19:48.157877 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.92s 2025-02-10 09:19:48.157900 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.17s 2025-02-10 09:19:48.157921 | orchestrator | 2025-02-10 09:19:48.157944 | orchestrator | 2025-02-10 09:19:48.157967 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-02-10 09:19:48.157990 | orchestrator | 2025-02-10 09:19:48.158136 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-02-10 09:19:48.158175 | orchestrator | Monday 10 February 2025 09:18:32 +0000 (0:00:00.227) 0:00:00.227 ******* 2025-02-10 09:19:48.158198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-02-10 09:19:48.158214 | orchestrator | 2025-02-10 09:19:48.158228 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-02-10 09:19:48.158242 | orchestrator | Monday 10 February 2025 09:18:33 +0000 (0:00:00.325) 0:00:00.553 ******* 2025-02-10 09:19:48.158255 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-02-10 09:19:48.158269 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-02-10 09:19:48.158283 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-02-10 09:19:48.158297 | orchestrator | 2025-02-10 09:19:48.158311 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-02-10 09:19:48.158325 | orchestrator | Monday 10 February 2025 09:18:34 +0000 (0:00:01.333) 0:00:01.886 ******* 2025-02-10 09:19:48.158338 | orchestrator | changed: [testbed-manager] 
2025-02-10 09:19:48.158352 | orchestrator | 2025-02-10 09:19:48.158366 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-02-10 09:19:48.158380 | orchestrator | Monday 10 February 2025 09:18:35 +0000 (0:00:01.359) 0:00:03.246 ******* 2025-02-10 09:19:48.158394 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-02-10 09:19:48.158419 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.158443 | orchestrator | 2025-02-10 09:19:48.158485 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-02-10 09:19:48.158510 | orchestrator | Monday 10 February 2025 09:19:27 +0000 (0:00:51.656) 0:00:54.903 ******* 2025-02-10 09:19:48.158536 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.158621 | orchestrator | 2025-02-10 09:19:48.158651 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-02-10 09:19:48.158674 | orchestrator | Monday 10 February 2025 09:19:30 +0000 (0:00:02.509) 0:00:57.412 ******* 2025-02-10 09:19:48.158688 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.158702 | orchestrator | 2025-02-10 09:19:48.158716 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-02-10 09:19:48.158729 | orchestrator | Monday 10 February 2025 09:19:32 +0000 (0:00:02.758) 0:01:00.171 ******* 2025-02-10 09:19:48.158743 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.158756 | orchestrator | 2025-02-10 09:19:48.158770 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-02-10 09:19:48.158784 | orchestrator | Monday 10 February 2025 09:19:37 +0000 (0:00:04.305) 0:01:04.476 ******* 2025-02-10 09:19:48.158797 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.158811 | orchestrator | 2025-02-10 09:19:48.158824 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-02-10 09:19:48.158837 | orchestrator | Monday 10 February 2025 09:19:38 +0000 (0:00:01.023) 0:01:05.500 ******* 2025-02-10 09:19:48.158851 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.158865 | orchestrator | 2025-02-10 09:19:48.158878 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-02-10 09:19:48.158891 | orchestrator | Monday 10 February 2025 09:19:39 +0000 (0:00:00.881) 0:01:06.382 ******* 2025-02-10 09:19:48.158905 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.158919 | orchestrator | 2025-02-10 09:19:48.158932 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:19:48.158946 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.158960 | orchestrator | 2025-02-10 09:19:48.158974 | orchestrator | Monday 10 February 2025 09:19:39 +0000 (0:00:00.384) 0:01:06.766 ******* 2025-02-10 09:19:48.158987 | orchestrator | =============================================================================== 2025-02-10 09:19:48.159001 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 51.66s 2025-02-10 09:19:48.159015 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.31s 2025-02-10 09:19:48.159029 | orchestrator | osism.services.openstackclient : Remove 
ospurge wrapper script ---------- 2.76s 2025-02-10 09:19:48.159050 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.51s 2025-02-10 09:19:48.159065 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.36s 2025-02-10 09:19:48.159079 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.33s 2025-02-10 09:19:48.159092 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.02s 2025-02-10 09:19:48.159106 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.88s 2025-02-10 09:19:48.159120 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s 2025-02-10 09:19:48.159133 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.33s 2025-02-10 09:19:48.159147 | orchestrator | 2025-02-10 09:19:48.159160 | orchestrator | 2025-02-10 09:19:48.159174 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:19:48.159187 | orchestrator | 2025-02-10 09:19:48.159201 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:19:48.159290 | orchestrator | Monday 10 February 2025 09:18:32 +0000 (0:00:00.572) 0:00:00.572 ******* 2025-02-10 09:19:48.159308 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-02-10 09:19:48.159322 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-02-10 09:19:48.159335 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-02-10 09:19:48.159395 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-02-10 09:19:48.159412 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-02-10 09:19:48.159437 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-02-10 09:19:48.159469 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-02-10 09:19:48.159483 | orchestrator | 2025-02-10 09:19:48.159497 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-02-10 09:19:48.159524 | orchestrator | 2025-02-10 09:19:48.159538 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-02-10 09:19:48.159552 | orchestrator | Monday 10 February 2025 09:18:34 +0000 (0:00:01.596) 0:00:02.169 ******* 2025-02-10 09:19:48.159655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:19:48.159677 | orchestrator | 2025-02-10 09:19:48.159691 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-02-10 09:19:48.159706 | orchestrator | Monday 10 February 2025 09:18:36 +0000 (0:00:02.534) 0:00:04.703 ******* 2025-02-10 09:19:48.159719 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:48.159735 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:19:48.159758 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:19:48.159782 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:19:48.159806 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:19:48.159830 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.159855 | 
orchestrator | ok: [testbed-node-5] 2025-02-10 09:19:48.159879 | orchestrator | 2025-02-10 09:19:48.159896 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-02-10 09:19:48.159922 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:02.273) 0:00:06.977 ******* 2025-02-10 09:19:48.159937 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:48.159950 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:19:48.159964 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.159977 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:19:48.159991 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:19:48.160004 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:19:48.160025 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:19:48.160038 | orchestrator | 2025-02-10 09:19:48.160052 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-02-10 09:19:48.160066 | orchestrator | Monday 10 February 2025 09:18:41 +0000 (0:00:02.569) 0:00:09.546 ******* 2025-02-10 09:19:48.160080 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.160094 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:48.160108 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:48.160122 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:48.160135 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:48.160147 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:48.160160 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:48.160172 | orchestrator | 2025-02-10 09:19:48.160184 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-02-10 09:19:48.160196 | orchestrator | Monday 10 February 2025 09:18:44 +0000 (0:00:02.581) 0:00:12.128 ******* 2025-02-10 09:19:48.160208 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:48.160220 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:48.160232 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.160244 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:48.160256 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:48.160268 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:48.160280 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:48.160292 | orchestrator | 2025-02-10 09:19:48.160304 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-02-10 09:19:48.160316 | orchestrator | Monday 10 February 2025 09:18:54 +0000 (0:00:10.925) 0:00:23.054 ******* 2025-02-10 09:19:48.160328 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:48.160341 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:48.160353 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:48.160379 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:48.160392 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:48.160415 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:48.160428 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.160441 | orchestrator | 2025-02-10 09:19:48.160506 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-02-10 09:19:48.160522 | orchestrator | Monday 10 February 2025 09:19:12 +0000 (0:00:17.582) 0:00:40.636 ******* 2025-02-10 09:19:48.160535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:19:48.160553 | orchestrator | 2025-02-10 09:19:48.160592 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-02-10 09:19:48.160605 | orchestrator | Monday 10 February 2025 09:19:14 +0000 (0:00:02.162) 0:00:42.799 ******* 2025-02-10 09:19:48.160618 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-02-10 09:19:48.160630 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-02-10 09:19:48.160643 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-02-10 09:19:48.160655 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-02-10 09:19:48.160667 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-02-10 09:19:48.160680 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-02-10 09:19:48.160692 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-02-10 09:19:48.160704 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-02-10 09:19:48.160716 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-02-10 09:19:48.160728 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-02-10 09:19:48.160740 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-02-10 09:19:48.160752 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-02-10 09:19:48.160764 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-02-10 09:19:48.160777 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-02-10 09:19:48.160789 | orchestrator | 2025-02-10 09:19:48.160801 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-02-10 09:19:48.160814 | orchestrator | Monday 10 February 2025 09:19:23 +0000 (0:00:08.473) 0:00:51.273 ******* 2025-02-10 09:19:48.160827 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.160839 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:48.160872 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:19:48.160886 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:19:48.160898 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:19:48.160910 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:19:48.160923 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:19:48.160935 | orchestrator | 2025-02-10 09:19:48.160948 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-02-10 09:19:48.160960 | orchestrator | Monday 10 February 2025 09:19:25 +0000 (0:00:02.075) 0:00:53.348 ******* 2025-02-10 09:19:48.160972 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:48.160985 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:48.160997 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.161009 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:48.161022 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:48.161034 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:48.161045 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:48.161058 | orchestrator | 2025-02-10 09:19:48.161070 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-02-10 09:19:48.161088 | orchestrator | Monday 10 February 2025 09:19:29 +0000 (0:00:03.914) 0:00:57.263 ******* 2025-02-10 09:19:48.161100 | 
orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:48.161113 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:19:48.161133 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:19:48.161146 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.161166 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:19:48.161179 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:19:48.161191 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:19:48.161203 | orchestrator | 2025-02-10 09:19:48.161215 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-02-10 09:19:48.161228 | orchestrator | Monday 10 February 2025 09:19:34 +0000 (0:00:05.008) 0:01:02.271 ******* 2025-02-10 09:19:48.161240 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:19:48.161252 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:48.161264 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:19:48.161277 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:48.161288 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:19:48.161301 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:19:48.161313 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:19:48.161325 | orchestrator | 2025-02-10 09:19:48.161337 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-02-10 09:19:48.161350 | orchestrator | Monday 10 February 2025 09:19:37 +0000 (0:00:03.711) 0:01:05.983 ******* 2025-02-10 09:19:48.161362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-02-10 09:19:48.161376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:19:48.161390 | orchestrator | 2025-02-10 09:19:48.161403 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-02-10 09:19:48.161415 | orchestrator | Monday 10 February 2025 09:19:41 +0000 (0:00:03.134) 0:01:09.118 ******* 2025-02-10 09:19:48.161427 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.161439 | orchestrator | 2025-02-10 09:19:48.161451 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-02-10 09:19:48.161464 | orchestrator | Monday 10 February 2025 09:19:43 +0000 (0:00:02.781) 0:01:11.899 ******* 2025-02-10 09:19:48.161476 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:48.161496 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:48.161510 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:48.161523 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:48.161535 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:48.161548 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:48.161581 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:48.161596 | orchestrator | 2025-02-10 09:19:48.161608 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:19:48.161621 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.161633 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.161646 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.161664 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.161676 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.161688 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.161700 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:48.161713 | orchestrator | 2025-02-10 09:19:48.161731 | orchestrator | Monday 10 February 2025 09:19:47 +0000 (0:00:03.564) 0:01:15.464 ******* 2025-02-10 09:19:48.161743 | orchestrator | =============================================================================== 2025-02-10 09:19:48.161756 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.58s 2025-02-10 09:19:48.161769 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.93s 2025-02-10 09:19:48.161781 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.47s 2025-02-10 09:19:48.161793 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 5.01s 2025-02-10 09:19:48.161805 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.91s 2025-02-10 09:19:48.161817 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.71s 2025-02-10 09:19:48.161829 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.56s 2025-02-10 09:19:48.161841 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 3.13s 2025-02-10 09:19:48.161853 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.78s 2025-02-10 09:19:48.161866 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.58s 2025-02-10 09:19:48.161878 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.57s 2025-02-10 09:19:48.161890 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.53s 2025-02-10 09:19:48.161902 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.27s 2025-02-10 09:19:48.161914 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.16s 2025-02-10 09:19:48.161932 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.08s 2025-02-10 09:19:48.162123 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.60s 2025-02-10 09:19:48.162159 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task c0459b89-ef67-4d4c-ac71-3bba9ff494ef is in state SUCCESS 2025-02-10 09:19:48.162207 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:48.162250 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:48.162348 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:48.162369 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:51.235246 | 
orchestrator | 2025-02-10 09:19:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:51.235457 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:51.237392 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:51.238174 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:54.280164 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:54.280362 | orchestrator | 2025-02-10 09:19:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:54.280419 | orchestrator | 2025-02-10 09:19:54 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:54.280704 | orchestrator | 2025-02-10 09:19:54 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:54.281211 | orchestrator | 2025-02-10 09:19:54 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:54.281289 | orchestrator | 2025-02-10 09:19:54 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:19:54.282292 | orchestrator | 2025-02-10 09:19:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:57.321491 | orchestrator | 2025-02-10 09:19:57 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:19:57.325505 | orchestrator | 2025-02-10 09:19:57 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state STARTED 2025-02-10 09:19:57.325712 | orchestrator | 2025-02-10 09:19:57 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:19:57.326540 | orchestrator | 2025-02-10 09:19:57 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:00.372238 | orchestrator | 2025-02-10 09:19:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:00.372403 | orchestrator | 2025-02-10 09:20:00 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:00.372725 | orchestrator | 2025-02-10 09:20:00 | INFO  | Task 795feb5b-bcd9-439b-8a0e-5ab7a71567b8 is in state SUCCESS 2025-02-10 09:20:00.373117 | orchestrator | 2025-02-10 09:20:00 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:00.373619 | orchestrator | 2025-02-10 09:20:00 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:00.373688 | orchestrator | 2025-02-10 09:20:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:03.439991 | orchestrator | 2025-02-10 09:20:03 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:06.476132 | orchestrator | 2025-02-10 09:20:03 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:06.476285 | orchestrator | 2025-02-10 09:20:03 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:06.476308 | orchestrator | 2025-02-10 09:20:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:06.476346 | orchestrator | 2025-02-10 09:20:06 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:06.476438 | orchestrator | 2025-02-10 09:20:06 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:06.477987 | orchestrator | 2025-02-10 
09:20:06 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:09.521955 | orchestrator | 2025-02-10 09:20:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:09.522204 | orchestrator | 2025-02-10 09:20:09 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:09.522368 | orchestrator | 2025-02-10 09:20:09 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:12.575227 | orchestrator | 2025-02-10 09:20:09 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:12.575376 | orchestrator | 2025-02-10 09:20:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:12.575419 | orchestrator | 2025-02-10 09:20:12 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:12.579220 | orchestrator | 2025-02-10 09:20:12 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:12.580785 | orchestrator | 2025-02-10 09:20:12 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:12.581403 | orchestrator | 2025-02-10 09:20:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:15.629690 | orchestrator | 2025-02-10 09:20:15 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:15.631293 | orchestrator | 2025-02-10 09:20:15 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:15.631348 | orchestrator | 2025-02-10 09:20:15 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:18.673631 | orchestrator | 2025-02-10 09:20:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:18.673813 | orchestrator | 2025-02-10 09:20:18 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:18.673888 | orchestrator | 2025-02-10 09:20:18 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:18.674111 | orchestrator | 2025-02-10 09:20:18 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:18.674178 | orchestrator | 2025-02-10 09:20:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:21.716147 | orchestrator | 2025-02-10 09:20:21 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:21.716382 | orchestrator | 2025-02-10 09:20:21 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:21.716432 | orchestrator | 2025-02-10 09:20:21 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:24.764837 | orchestrator | 2025-02-10 09:20:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:24.765001 | orchestrator | 2025-02-10 09:20:24 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:24.772082 | orchestrator | 2025-02-10 09:20:24 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:24.773246 | orchestrator | 2025-02-10 09:20:24 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:27.851959 | orchestrator | 2025-02-10 09:20:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:27.852119 | orchestrator | 2025-02-10 09:20:27 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:27.852198 | orchestrator | 2025-02-10 09:20:27 | INFO  | Task 
32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:27.852216 | orchestrator | 2025-02-10 09:20:27 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:27.852233 | orchestrator | 2025-02-10 09:20:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:30.893363 | orchestrator | 2025-02-10 09:20:30 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:30.893761 | orchestrator | 2025-02-10 09:20:30 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:30.893825 | orchestrator | 2025-02-10 09:20:30 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:30.895287 | orchestrator | 2025-02-10 09:20:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:33.929469 | orchestrator | 2025-02-10 09:20:33 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:36.974105 | orchestrator | 2025-02-10 09:20:33 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:36.974214 | orchestrator | 2025-02-10 09:20:33 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:36.974224 | orchestrator | 2025-02-10 09:20:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:36.974245 | orchestrator | 2025-02-10 09:20:36 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:36.974341 | orchestrator | 2025-02-10 09:20:36 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:36.975897 | orchestrator | 2025-02-10 09:20:36 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:40.026527 | orchestrator | 2025-02-10 09:20:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:40.026803 | orchestrator | 2025-02-10 09:20:40 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:40.027983 | orchestrator | 2025-02-10 09:20:40 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:40.029057 | orchestrator | 2025-02-10 09:20:40 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:43.067704 | orchestrator | 2025-02-10 09:20:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:43.067847 | orchestrator | 2025-02-10 09:20:43 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:43.069093 | orchestrator | 2025-02-10 09:20:43 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:43.072028 | orchestrator | 2025-02-10 09:20:43 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:46.112670 | orchestrator | 2025-02-10 09:20:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:46.112849 | orchestrator | 2025-02-10 09:20:46 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:46.114953 | orchestrator | 2025-02-10 09:20:46 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:46.116856 | orchestrator | 2025-02-10 09:20:46 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:49.162242 | orchestrator | 2025-02-10 09:20:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:49.162394 | orchestrator | 2025-02-10 09:20:49 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state 
STARTED 2025-02-10 09:20:49.166082 | orchestrator | 2025-02-10 09:20:49 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:49.166172 | orchestrator | 2025-02-10 09:20:49 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:52.214847 | orchestrator | 2025-02-10 09:20:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:52.215016 | orchestrator | 2025-02-10 09:20:52 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:52.219062 | orchestrator | 2025-02-10 09:20:52 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state STARTED 2025-02-10 09:20:52.221312 | orchestrator | 2025-02-10 09:20:52 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:55.281545 | orchestrator | 2025-02-10 09:20:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:55.281739 | orchestrator | 2025-02-10 09:20:55 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:55.288443 | orchestrator | 2025-02-10 09:20:55 | INFO  | Task 32362e35-ede7-496a-ac3c-77037eccd8fa is in state SUCCESS 2025-02-10 09:20:55.290375 | orchestrator | 2025-02-10 09:20:55.290432 | orchestrator | 2025-02-10 09:20:55.290448 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-02-10 09:20:55.290464 | orchestrator | 2025-02-10 09:20:55.290478 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-02-10 09:20:55.290494 | orchestrator | Monday 10 February 2025 09:18:51 +0000 (0:00:00.487) 0:00:00.487 ******* 2025-02-10 09:20:55.290508 | orchestrator | ok: [testbed-manager] 2025-02-10 09:20:55.290524 | orchestrator | 2025-02-10 09:20:55.290566 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-02-10 09:20:55.290631 | orchestrator | Monday 10 February 2025 09:18:53 +0000 (0:00:02.074) 0:00:02.562 ******* 2025-02-10 09:20:55.290648 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-02-10 09:20:55.290670 | orchestrator | 2025-02-10 09:20:55.290685 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-02-10 09:20:55.290698 | orchestrator | Monday 10 February 2025 09:18:54 +0000 (0:00:01.270) 0:00:03.833 ******* 2025-02-10 09:20:55.290712 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.290727 | orchestrator | 2025-02-10 09:20:55.290740 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-02-10 09:20:55.290754 | orchestrator | Monday 10 February 2025 09:18:57 +0000 (0:00:03.249) 0:00:07.083 ******* 2025-02-10 09:20:55.290768 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
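The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above come from the deploy wrapper polling each background task until it reports SUCCESS, and the "FAILED - RETRYING … (10 retries left)" notices reflect the same wait-and-retry idea at the level of a single task. A minimal, illustrative sketch of such a polling loop in Python is shown below; the function name and the get_state callback are assumptions made for this sketch, not the actual osism CLI internals.

    import time
    from typing import Callable, Iterable

    def wait_for_tasks(task_ids: Iterable[str],
                       get_state: Callable[[str], str],
                       interval: float = 1.0) -> None:
        """Poll every task until it reports SUCCESS, logging each check.

        get_state stands in for whatever result backend the deployment
        queries; it is a placeholder, not the real osism API.
        """
        pending = set(task_ids)
        while pending:
            # sorted() takes a snapshot, so discarding from the set is safe here
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)

With a one-second interval this reproduces the cadence of the records above; the real tooling naturally tracks more states than just STARTED and SUCCESS.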
2025-02-10 09:20:55.290782 | orchestrator | ok: [testbed-manager] 2025-02-10 09:20:55.290796 | orchestrator | 2025-02-10 09:20:55.290810 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-02-10 09:20:55.290824 | orchestrator | Monday 10 February 2025 09:19:55 +0000 (0:00:58.008) 0:01:05.091 ******* 2025-02-10 09:20:55.290837 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.290851 | orchestrator | 2025-02-10 09:20:55.290865 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:20:55.290879 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:20:55.290895 | orchestrator | 2025-02-10 09:20:55.290909 | orchestrator | Monday 10 February 2025 09:19:59 +0000 (0:00:03.709) 0:01:08.800 ******* 2025-02-10 09:20:55.290922 | orchestrator | =============================================================================== 2025-02-10 09:20:55.290936 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.01s 2025-02-10 09:20:55.290950 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.71s 2025-02-10 09:20:55.290965 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.25s 2025-02-10 09:20:55.290980 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.08s 2025-02-10 09:20:55.290996 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.27s 2025-02-10 09:20:55.291011 | orchestrator | 2025-02-10 09:20:55.291027 | orchestrator | 2025-02-10 09:20:55.291043 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-02-10 09:20:55.291058 | orchestrator | 2025-02-10 09:20:55.291073 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-02-10 09:20:55.291089 | orchestrator | Monday 10 February 2025 09:18:27 +0000 (0:00:00.375) 0:00:00.375 ******* 2025-02-10 09:20:55.291105 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:20:55.291122 | orchestrator | 2025-02-10 09:20:55.291136 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-02-10 09:20:55.291149 | orchestrator | Monday 10 February 2025 09:18:29 +0000 (0:00:01.812) 0:00:02.187 ******* 2025-02-10 09:20:55.291163 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:20:55.291177 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:20:55.291191 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:20:55.291204 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:20:55.291218 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:20:55.291232 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:20:55.291245 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:20:55.291260 | orchestrator | changed: [testbed-manager] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:20:55.291285 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:20:55.291303 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:20:55.291317 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:20:55.291331 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:20:55.291345 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:20:55.291359 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:20:55.291372 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:20:55.291391 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:20:55.291405 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:20:55.291429 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:20:55.291443 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:20:55.291457 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:20:55.291471 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:20:55.291485 | orchestrator | 2025-02-10 09:20:55.291499 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-02-10 09:20:55.291513 | orchestrator | Monday 10 February 2025 09:18:33 +0000 (0:00:04.452) 0:00:06.639 ******* 2025-02-10 09:20:55.291527 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:20:55.291548 | orchestrator | 2025-02-10 09:20:55.291562 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-02-10 09:20:55.291576 | orchestrator | Monday 10 February 2025 09:18:35 +0000 (0:00:01.796) 0:00:08.436 ******* 2025-02-10 09:20:55.291594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.291631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.291647 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.291669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.291683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.291705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.291720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.291735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291764 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291867 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.291976 | orchestrator | 2025-02-10 09:20:55.291995 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-02-10 09:20:55.292010 | orchestrator | Monday 10 February 2025 09:18:39 +0000 (0:00:04.462) 0:00:12.898 ******* 2025-02-10 09:20:55.292024 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292039 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292058 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292109 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:20:55.292124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292239 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:55.292253 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:55.292267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292316 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:55.292330 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:55.292344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292393 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:55.292407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292451 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:55.292464 | orchestrator | 2025-02-10 09:20:55.292478 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-02-10 09:20:55.292493 | orchestrator | Monday 10 February 2025 09:18:41 +0000 (0:00:01.819) 0:00:14.717 ******* 2025-02-10 09:20:55.292513 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292825 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292852 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.292932 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:20:55.292947 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:55.292963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.292986 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.293040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293070 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:55.293088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.293101 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293129 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:55.293142 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:55.293171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.293192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293220 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:55.293252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:20:55.293265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.293291 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:55.293303 | orchestrator | 2025-02-10 09:20:55.293316 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-02-10 09:20:55.293328 | orchestrator | Monday 10 February 2025 09:18:45 +0000 (0:00:03.857) 0:00:18.575 ******* 2025-02-10 09:20:55.293382 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:20:55.293395 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:55.293408 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:55.293420 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:55.293432 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:55.293444 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:55.293458 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:55.293472 | orchestrator | 2025-02-10 09:20:55.293485 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-02-10 09:20:55.293499 | orchestrator | Monday 10 February 2025 09:18:47 +0000 (0:00:02.131) 0:00:20.706 ******* 2025-02-10 09:20:55.293513 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:20:55.293527 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:55.293547 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:55.293561 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:55.293574 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:55.293588 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:55.293621 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:55.293635 | orchestrator | 2025-02-10 09:20:55.293654 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-02-10 09:20:55.293668 | orchestrator | Monday 10 February 2025 09:18:49 +0000 (0:00:02.323) 0:00:23.032 ******* 2025-02-10 09:20:55.293682 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:20:55.293696 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:55.293710 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:55.293724 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:55.293737 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:55.293751 | 
orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:55.293764 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.293778 | orchestrator | 2025-02-10 09:20:55.293792 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-02-10 09:20:55.293806 | orchestrator | Monday 10 February 2025 09:19:29 +0000 (0:00:39.687) 0:01:02.720 ******* 2025-02-10 09:20:55.293818 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:20:55.293830 | orchestrator | ok: [testbed-manager] 2025-02-10 09:20:55.293842 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:20:55.293854 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:20:55.293866 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:20:55.293884 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:20:55.293896 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:20:55.293908 | orchestrator | 2025-02-10 09:20:55.293921 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-02-10 09:20:55.293933 | orchestrator | Monday 10 February 2025 09:19:33 +0000 (0:00:04.286) 0:01:07.006 ******* 2025-02-10 09:20:55.293945 | orchestrator | ok: [testbed-manager] 2025-02-10 09:20:55.293957 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:20:55.293969 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:20:55.293982 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:20:55.293993 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:20:55.294005 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:20:55.294059 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:20:55.294075 | orchestrator | 2025-02-10 09:20:55.294087 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-02-10 09:20:55.294100 | orchestrator | Monday 10 February 2025 09:19:36 +0000 (0:00:02.421) 0:01:09.428 ******* 2025-02-10 09:20:55.294112 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:20:55.294124 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:55.294136 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:55.294148 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:55.294160 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:55.294172 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:55.294185 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:55.294197 | orchestrator | 2025-02-10 09:20:55.294209 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-02-10 09:20:55.294221 | orchestrator | Monday 10 February 2025 09:19:38 +0000 (0:00:01.746) 0:01:11.174 ******* 2025-02-10 09:20:55.294233 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:20:55.294245 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:55.294257 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:55.294269 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:55.294281 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:55.294294 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:55.294306 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:55.294318 | orchestrator | 2025-02-10 09:20:55.294331 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-02-10 09:20:55.294343 | orchestrator | Monday 10 February 2025 09:19:39 +0000 (0:00:01.090) 0:01:12.265 ******* 2025-02-10 09:20:55.294363 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.294380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.294394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.294427 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.294455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.294467 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294507 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294520 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.294540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
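
Note: every loop item dumped in the tasks around this point ("Copying over backend internal TLS key", "Copying over config.json files for services", and later "Check common containers") comes from the same kolla-ansible service map used by the common role. Below is a minimal sketch of that structure, reconstructed purely from the entries visible in this log; the variable name common_services is an assumption, while the images, environment variables and volume lists are copied verbatim from the log items.

# Sketch of the per-service definitions the common-role loops iterate over.
# All values are taken from the log output above; only the variable name is invented.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "fluentd_data:/var/lib/fluentd/data/",
            "/var/log/journal:/var/log/journal:ro",
        ],
        "dimensions": {},
    },
    "kolla-toolbox": {
        "container_name": "kolla_toolbox",
        "group": "kolla-toolbox",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206",
        "environment": {
            "ANSIBLE_NOCOLOR": "1",
            "ANSIBLE_LIBRARY": "/usr/share/ansible",
            "REQUESTS_CA_BUNDLE": "/etc/ssl/certs/ca-certificates.crt",
        },
        "privileged": True,
        "volumes": [
            "/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "/dev/:/dev/",
            "/run/:/run/:shared",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

Each of these tasks loops over the map host by host: an item printed as "skipping" failed that task's condition for that host (the backend internal TLS copy, for instance, is skipped on every node, presumably because backend TLS is not enabled in this testbed), while "ok" and "changed" mean the file or directory for that service was already correct or was written.
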
2025-02-10 09:20:55.294567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.294744 | orchestrator | 2025-02-10 09:20:55.294757 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-02-10 09:20:55.294769 | orchestrator | Monday 10 February 2025 09:19:44 +0000 (0:00:05.571) 0:01:17.836 ******* 2025-02-10 09:20:55.294782 | orchestrator | [WARNING]: Skipped 2025-02-10 09:20:55.294801 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-02-10 09:20:55.294814 | orchestrator | to this access issue: 2025-02-10 09:20:55.294826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-02-10 09:20:55.294838 | orchestrator | directory 2025-02-10 09:20:55.294850 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:20:55.294863 | orchestrator | 2025-02-10 09:20:55.294875 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-02-10 09:20:55.294888 | orchestrator | Monday 10 February 2025 09:19:45 +0000 (0:00:01.110) 0:01:18.947 ******* 2025-02-10 09:20:55.294900 | orchestrator | [WARNING]: Skipped 2025-02-10 09:20:55.294917 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-02-10 09:20:55.294930 | orchestrator | to this access issue: 2025-02-10 09:20:55.294942 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-02-10 09:20:55.294955 | orchestrator | directory 2025-02-10 09:20:55.294967 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:20:55.294980 | orchestrator | 2025-02-10 09:20:55.294992 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-02-10 09:20:55.295004 | orchestrator | Monday 10 February 2025 09:19:46 +0000 (0:00:00.769) 0:01:19.717 ******* 2025-02-10 09:20:55.295016 | orchestrator | [WARNING]: Skipped 2025-02-10 09:20:55.295028 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-02-10 09:20:55.295041 | orchestrator | to this access issue: 2025-02-10 09:20:55.295053 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-02-10 09:20:55.295066 | orchestrator | directory 2025-02-10 09:20:55.295078 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:20:55.295090 | orchestrator | 2025-02-10 09:20:55.295103 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-02-10 09:20:55.295115 | orchestrator | Monday 10 February 2025 09:19:47 +0000 (0:00:00.682) 0:01:20.399 ******* 2025-02-10 09:20:55.295127 | orchestrator | [WARNING]: Skipped 2025-02-10 09:20:55.295139 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-02-10 09:20:55.295151 | orchestrator | to this access issue: 2025-02-10 09:20:55.295163 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-02-10 09:20:55.295175 | orchestrator | directory 2025-02-10 09:20:55.295188 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:20:55.295200 | orchestrator | 2025-02-10 09:20:55.295212 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-02-10 09:20:55.295225 | orchestrator | Monday 10 February 2025 09:19:48 +0000 (0:00:01.081) 0:01:21.481 ******* 2025-02-10 09:20:55.295237 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.295249 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:55.295261 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:55.295273 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:55.295286 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:55.295298 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:55.295310 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:55.295322 | orchestrator | 2025-02-10 09:20:55.295334 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-02-10 09:20:55.295346 | orchestrator | Monday 10 February 2025 09:19:53 +0000 (0:00:04.727) 0:01:26.208 ******* 2025-02-10 09:20:55.295359 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:20:55.295371 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:20:55.295384 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:20:55.295401 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:20:55.295421 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:20:55.295434 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:20:55.295446 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:20:55.295459 | orchestrator | 2025-02-10 09:20:55.295471 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-02-10 09:20:55.295484 | orchestrator | Monday 10 February 2025 09:19:56 +0000 (0:00:03.243) 0:01:29.452 ******* 2025-02-10 09:20:55.295496 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.295508 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:55.295521 | orchestrator | changed: 
[testbed-node-0] 2025-02-10 09:20:55.295533 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:55.295545 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:55.295557 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:55.295570 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:55.295582 | orchestrator | 2025-02-10 09:20:55.295594 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-02-10 09:20:55.295624 | orchestrator | Monday 10 February 2025 09:19:59 +0000 (0:00:02.997) 0:01:32.449 ******* 2025-02-10 09:20:55.295641 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.295655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.295668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.295682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.295695 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-02-10 09:20:55.295719 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.295737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.295750 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.295763 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.295776 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.295789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.295802 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.295821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.295840 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.295857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.295870 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.295883 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.295896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:20:55.295909 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.295932 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.295945 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.295958 | orchestrator | 2025-02-10 09:20:55.295981 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-02-10 09:20:55.295994 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:02.292) 0:01:34.742 ******* 2025-02-10 09:20:55.296006 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:20:55.296018 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:20:55.296031 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:20:55.296043 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:20:55.296055 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:20:55.296068 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:20:55.296080 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:20:55.296092 | orchestrator | 2025-02-10 09:20:55.296105 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-02-10 09:20:55.296117 | orchestrator | Monday 10 February 2025 09:20:03 +0000 (0:00:02.233) 0:01:36.976 ******* 2025-02-10 09:20:55.296129 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:20:55.296141 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:20:55.296153 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:20:55.296165 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:20:55.296177 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:20:55.296189 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:20:55.296202 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:20:55.296214 | orchestrator | 2025-02-10 09:20:55.296226 | orchestrator | TASK [common : Check common containers] **************************************** 2025-02-10 09:20:55.296238 | orchestrator | Monday 10 February 2025 09:20:06 +0000 (0:00:02.165) 0:01:39.141 ******* 2025-02-10 09:20:55.296251 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.296269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.296282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.296311 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296325 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.296351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.296364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296382 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.296412 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296454 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:20:55.296480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296512 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:20:55.296655 | orchestrator | 2025-02-10 09:20:55.296667 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-02-10 09:20:55.296680 | orchestrator | Monday 10 February 2025 09:20:09 +0000 (0:00:03.691) 0:01:42.833 ******* 2025-02-10 09:20:55.296692 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.296705 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:55.296717 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:55.296730 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:55.296742 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:55.296755 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:55.296771 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:55.296784 | orchestrator | 2025-02-10 09:20:55.296796 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-02-10 09:20:55.296818 | orchestrator | Monday 10 February 2025 09:20:11 +0000 
(0:00:01.962) 0:01:44.795 ******* 2025-02-10 09:20:55.296831 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.296843 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:55.296855 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:55.296867 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:55.296880 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:55.296892 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:55.296904 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:55.296916 | orchestrator | 2025-02-10 09:20:55.296929 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:20:55.296941 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:01.569) 0:01:46.364 ******* 2025-02-10 09:20:55.296954 | orchestrator | 2025-02-10 09:20:55.296966 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:20:55.296978 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:00.063) 0:01:46.428 ******* 2025-02-10 09:20:55.296990 | orchestrator | 2025-02-10 09:20:55.297003 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:20:55.297015 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:00.056) 0:01:46.485 ******* 2025-02-10 09:20:55.297028 | orchestrator | 2025-02-10 09:20:55.297040 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:20:55.297052 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:00.059) 0:01:46.544 ******* 2025-02-10 09:20:55.297064 | orchestrator | 2025-02-10 09:20:55.297076 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:20:55.297089 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:00.244) 0:01:46.789 ******* 2025-02-10 09:20:55.297101 | orchestrator | 2025-02-10 09:20:55.297113 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:20:55.297125 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:00.050) 0:01:46.839 ******* 2025-02-10 09:20:55.297138 | orchestrator | 2025-02-10 09:20:55.297150 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:20:55.297162 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:00.050) 0:01:46.889 ******* 2025-02-10 09:20:55.297174 | orchestrator | 2025-02-10 09:20:55.297187 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-02-10 09:20:55.297199 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:00.069) 0:01:46.959 ******* 2025-02-10 09:20:55.297212 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:55.297224 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.297236 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:55.297248 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:55.297261 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:55.297273 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:55.297285 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:55.297297 | orchestrator | 2025-02-10 09:20:55.297310 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-02-10 09:20:55.297322 | orchestrator | Monday 10 February 2025 09:20:22 
+0000 (0:00:08.262) 0:01:55.222 ******* 2025-02-10 09:20:55.297334 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:55.297347 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:55.297359 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:55.297371 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:55.297383 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:55.297395 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:55.297408 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:55.297420 | orchestrator | 2025-02-10 09:20:55.297432 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-02-10 09:20:55.297444 | orchestrator | Monday 10 February 2025 09:20:42 +0000 (0:00:20.206) 0:02:15.428 ******* 2025-02-10 09:20:55.297457 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:20:55.297469 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:20:55.297487 | orchestrator | ok: [testbed-manager] 2025-02-10 09:20:55.297500 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:20:55.297512 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:20:55.297531 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:20:58.334784 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:20:58.334931 | orchestrator | 2025-02-10 09:20:58.334954 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-02-10 09:20:58.334971 | orchestrator | Monday 10 February 2025 09:20:45 +0000 (0:00:02.757) 0:02:18.186 ******* 2025-02-10 09:20:58.334985 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:58.335000 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:58.335014 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:58.335028 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:58.335042 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:58.335056 | orchestrator | changed: [testbed-manager] 2025-02-10 09:20:58.335070 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:58.335084 | orchestrator | 2025-02-10 09:20:58.335098 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:20:58.335114 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:20:58.335129 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:20:58.335143 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:20:58.335157 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:20:58.335171 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:20:58.335184 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:20:58.335198 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:20:58.335212 | orchestrator | 2025-02-10 09:20:58.335226 | orchestrator | 2025-02-10 09:20:58.335240 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:20:58.335254 | orchestrator | Monday 10 February 2025 09:20:54 +0000 (0:00:09.598) 0:02:27.785 ******* 2025-02-10 09:20:58.335268 | orchestrator | 
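The loop items recorded by the common role's container check above carry the complete container specification that kolla-ansible hands to its container engine: container name, image, environment variables, privileged flag, and bind mounts. As a rough illustration only, and not kolla-ansible's actual code path, a spec shaped like the kolla_toolbox item could be launched with the Docker SDK for Python roughly as follows; the spec values are copied from the log, while the launch call itself is an assumption.

    # Illustrative sketch: start a container from a kolla-style spec dict.
    # The spec values come from the "Check common containers" items above;
    # the launch code is NOT kolla-ansible's implementation, just the
    # Docker SDK for Python (pip install docker) used for comparison.
    import docker

    spec = {
        "container_name": "kolla_toolbox",
        "image": "nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206",
        "environment": {
            "ANSIBLE_NOCOLOR": "1",
            "ANSIBLE_LIBRARY": "/usr/share/ansible",
            "REQUESTS_CA_BUNDLE": "/etc/ssl/certs/ca-certificates.crt",
        },
        "privileged": True,
        "volumes": [
            "/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
    }

    client = docker.from_env()
    container = client.containers.run(
        image=spec["image"],
        name=spec["container_name"],
        environment=spec["environment"],
        volumes=spec["volumes"],        # "src:dst[:mode]" bind strings
        privileged=spec["privileged"],
        detach=True,
    )
    print(container.name, container.status)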
=============================================================================== 2025-02-10 09:20:58.335282 | orchestrator | common : Ensure fluentd image is present for label check --------------- 39.69s 2025-02-10 09:20:58.335296 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 20.21s 2025-02-10 09:20:58.335342 | orchestrator | common : Restart cron container ----------------------------------------- 9.60s 2025-02-10 09:20:58.335358 | orchestrator | common : Restart fluentd container -------------------------------------- 8.26s 2025-02-10 09:20:58.335372 | orchestrator | common : Copying over config.json files for services -------------------- 5.57s 2025-02-10 09:20:58.335385 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.73s 2025-02-10 09:20:58.335399 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.46s 2025-02-10 09:20:58.335413 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.45s 2025-02-10 09:20:58.335427 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 4.29s 2025-02-10 09:20:58.335440 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.86s 2025-02-10 09:20:58.335454 | orchestrator | common : Check common containers ---------------------------------------- 3.69s 2025-02-10 09:20:58.335497 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.24s 2025-02-10 09:20:58.335512 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.00s 2025-02-10 09:20:58.335526 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.76s 2025-02-10 09:20:58.335540 | orchestrator | common : Set fluentd facts ---------------------------------------------- 2.42s 2025-02-10 09:20:58.335553 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.32s 2025-02-10 09:20:58.335567 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.29s 2025-02-10 09:20:58.335581 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.23s 2025-02-10 09:20:58.335594 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.17s 2025-02-10 09:20:58.335639 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.13s 2025-02-10 09:20:58.335667 | orchestrator | 2025-02-10 09:20:55 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:20:58.335691 | orchestrator | 2025-02-10 09:20:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:58.335731 | orchestrator | 2025-02-10 09:20:58 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:20:58.335820 | orchestrator | 2025-02-10 09:20:58 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:20:58.335843 | orchestrator | 2025-02-10 09:20:58 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:20:58.336444 | orchestrator | 2025-02-10 09:20:58 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:20:58.337158 | orchestrator | 2025-02-10 09:20:58 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:20:58.337800 | orchestrator | 2025-02-10 09:20:58 | 
INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:01.385566 | orchestrator | 2025-02-10 09:20:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:01.385798 | orchestrator | 2025-02-10 09:21:01 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:01.386320 | orchestrator | 2025-02-10 09:21:01 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:01.386379 | orchestrator | 2025-02-10 09:21:01 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:01.393960 | orchestrator | 2025-02-10 09:21:01 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:21:01.394872 | orchestrator | 2025-02-10 09:21:01 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:01.395726 | orchestrator | 2025-02-10 09:21:01 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:04.437860 | orchestrator | 2025-02-10 09:21:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:04.438074 | orchestrator | 2025-02-10 09:21:04 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:04.438130 | orchestrator | 2025-02-10 09:21:04 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:04.442464 | orchestrator | 2025-02-10 09:21:04 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:04.443584 | orchestrator | 2025-02-10 09:21:04 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:21:04.444832 | orchestrator | 2025-02-10 09:21:04 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:04.446839 | orchestrator | 2025-02-10 09:21:04 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:07.498677 | orchestrator | 2025-02-10 09:21:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:07.498839 | orchestrator | 2025-02-10 09:21:07 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:07.500275 | orchestrator | 2025-02-10 09:21:07 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:07.519658 | orchestrator | 2025-02-10 09:21:07 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:07.521291 | orchestrator | 2025-02-10 09:21:07 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:21:07.523117 | orchestrator | 2025-02-10 09:21:07 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:07.523743 | orchestrator | 2025-02-10 09:21:07 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:10.572940 | orchestrator | 2025-02-10 09:21:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:10.573101 | orchestrator | 2025-02-10 09:21:10 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:10.575086 | orchestrator | 2025-02-10 09:21:10 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:10.578739 | orchestrator | 2025-02-10 09:21:10 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:10.582831 | orchestrator | 2025-02-10 09:21:10 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:21:10.583434 | orchestrator | 
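The interleaved "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines are the deployment driver polling its task IDs until each one reaches a terminal state (a SUCCESS for one of the IDs appears further down). A hypothetical, minimal reimplementation of that wait pattern is sketched below; get_task_state() is an assumed placeholder for whatever client call returns the task state, not the real OSISM API.

    # Hypothetical sketch of the wait loop visible in this log: poll a set
    # of task IDs until every one reports a terminal state.
    # get_task_state() is an assumed placeholder, not the actual OSISM API.
    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                        level=logging.INFO)

    TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):       # sorted() copies, so discard below is safe
                state = get_task_state(task_id)
                logging.info("Task %s is in state %s", task_id, state)
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                logging.info("Wait %d second(s) until the next check", interval)
                time.sleep(interval)

With six task IDs and a one-second interval this reproduces the cadence seen in the surrounding output.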
2025-02-10 09:21:10 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:10.589432 | orchestrator | 2025-02-10 09:21:10 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:13.676522 | orchestrator | 2025-02-10 09:21:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:13.676774 | orchestrator | 2025-02-10 09:21:13 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:13.679070 | orchestrator | 2025-02-10 09:21:13 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:13.679453 | orchestrator | 2025-02-10 09:21:13 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:13.683668 | orchestrator | 2025-02-10 09:21:13 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:21:13.686414 | orchestrator | 2025-02-10 09:21:13 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:13.687650 | orchestrator | 2025-02-10 09:21:13 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:16.742739 | orchestrator | 2025-02-10 09:21:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:16.742906 | orchestrator | 2025-02-10 09:21:16 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:16.743546 | orchestrator | 2025-02-10 09:21:16 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:16.743583 | orchestrator | 2025-02-10 09:21:16 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:16.744450 | orchestrator | 2025-02-10 09:21:16 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:21:16.744968 | orchestrator | 2025-02-10 09:21:16 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:16.745922 | orchestrator | 2025-02-10 09:21:16 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:16.746100 | orchestrator | 2025-02-10 09:21:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:19.784333 | orchestrator | 2025-02-10 09:21:19 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:19.785231 | orchestrator | 2025-02-10 09:21:19 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:19.789164 | orchestrator | 2025-02-10 09:21:19 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:19.795388 | orchestrator | 2025-02-10 09:21:19 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:21:19.800406 | orchestrator | 2025-02-10 09:21:19 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:19.802764 | orchestrator | 2025-02-10 09:21:19 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:22.841277 | orchestrator | 2025-02-10 09:21:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:22.841449 | orchestrator | 2025-02-10 09:21:22 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:22.842846 | orchestrator | 2025-02-10 09:21:22 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:22.843941 | orchestrator | 2025-02-10 09:21:22 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 
09:21:22.844922 | orchestrator | 2025-02-10 09:21:22 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state STARTED 2025-02-10 09:21:22.845822 | orchestrator | 2025-02-10 09:21:22 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:22.846991 | orchestrator | 2025-02-10 09:21:22 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:25.896210 | orchestrator | 2025-02-10 09:21:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:25.896370 | orchestrator | 2025-02-10 09:21:25 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:25.899867 | orchestrator | 2025-02-10 09:21:25 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:25.900977 | orchestrator | 2025-02-10 09:21:25 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:25.901681 | orchestrator | 2025-02-10 09:21:25 | INFO  | Task a627d320-9f36-45f7-b6f8-6eebb79ece4f is in state SUCCESS 2025-02-10 09:21:25.902755 | orchestrator | 2025-02-10 09:21:25 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:25.908869 | orchestrator | 2025-02-10 09:21:25 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:28.942271 | orchestrator | 2025-02-10 09:21:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:28.942438 | orchestrator | 2025-02-10 09:21:28 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:28.942574 | orchestrator | 2025-02-10 09:21:28 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:28.942601 | orchestrator | 2025-02-10 09:21:28 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:21:28.943316 | orchestrator | 2025-02-10 09:21:28 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:28.944734 | orchestrator | 2025-02-10 09:21:28 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:28.945296 | orchestrator | 2025-02-10 09:21:28 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:31.979162 | orchestrator | 2025-02-10 09:21:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:31.979299 | orchestrator | 2025-02-10 09:21:31 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:31.979726 | orchestrator | 2025-02-10 09:21:31 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:31.980344 | orchestrator | 2025-02-10 09:21:31 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:21:31.982606 | orchestrator | 2025-02-10 09:21:31 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:31.983258 | orchestrator | 2025-02-10 09:21:31 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state STARTED 2025-02-10 09:21:31.983901 | orchestrator | 2025-02-10 09:21:31 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:31.983991 | orchestrator | 2025-02-10 09:21:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:35.023096 | orchestrator | 2025-02-10 09:21:35 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:35.023547 | orchestrator | 2025-02-10 09:21:35 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in 
state STARTED 2025-02-10 09:21:35.025250 | orchestrator | 2025-02-10 09:21:35 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:21:35.026471 | orchestrator | 2025-02-10 09:21:35 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:35.027686 | orchestrator | 2025-02-10 09:21:35.028283 | orchestrator | 2025-02-10 09:21:35.028327 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:21:35.028343 | orchestrator | 2025-02-10 09:21:35.028359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:21:35.028374 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:00.447) 0:00:00.447 ******* 2025-02-10 09:21:35.028388 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:21:35.028404 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:21:35.028418 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:21:35.028432 | orchestrator | 2025-02-10 09:21:35.028446 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:21:35.028460 | orchestrator | Monday 10 February 2025 09:21:02 +0000 (0:00:00.769) 0:00:01.217 ******* 2025-02-10 09:21:35.028474 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-02-10 09:21:35.028488 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-02-10 09:21:35.028502 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-02-10 09:21:35.028516 | orchestrator | 2025-02-10 09:21:35.028529 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-02-10 09:21:35.028543 | orchestrator | 2025-02-10 09:21:35.028557 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-02-10 09:21:35.028571 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:00.474) 0:00:01.692 ******* 2025-02-10 09:21:35.028585 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:21:35.028601 | orchestrator | 2025-02-10 09:21:35.028614 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-02-10 09:21:35.028659 | orchestrator | Monday 10 February 2025 09:21:04 +0000 (0:00:01.170) 0:00:02.862 ******* 2025-02-10 09:21:35.028673 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-02-10 09:21:35.028688 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-02-10 09:21:35.028702 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-02-10 09:21:35.028715 | orchestrator | 2025-02-10 09:21:35.028729 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-02-10 09:21:35.028776 | orchestrator | Monday 10 February 2025 09:21:05 +0000 (0:00:00.932) 0:00:03.794 ******* 2025-02-10 09:21:35.028791 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-02-10 09:21:35.028805 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-02-10 09:21:35.028819 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-02-10 09:21:35.028833 | orchestrator | 2025-02-10 09:21:35.028847 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-02-10 09:21:35.028860 | orchestrator | Monday 10 February 2025 09:21:09 +0000 (0:00:04.473) 0:00:08.268 
******* 2025-02-10 09:21:35.028874 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:21:35.028906 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:21:35.028922 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:21:35.028938 | orchestrator | 2025-02-10 09:21:35.028960 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-02-10 09:21:35.028976 | orchestrator | Monday 10 February 2025 09:21:15 +0000 (0:00:05.790) 0:00:14.059 ******* 2025-02-10 09:21:35.028992 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:21:35.029007 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:21:35.029024 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:21:35.029041 | orchestrator | 2025-02-10 09:21:35.029056 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:21:35.029073 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:21:35.029090 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:21:35.029107 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:21:35.029123 | orchestrator | 2025-02-10 09:21:35.029138 | orchestrator | 2025-02-10 09:21:35.029154 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:21:35.029170 | orchestrator | Monday 10 February 2025 09:21:23 +0000 (0:00:08.141) 0:00:22.200 ******* 2025-02-10 09:21:35.029186 | orchestrator | =============================================================================== 2025-02-10 09:21:35.029199 | orchestrator | memcached : Restart memcached container --------------------------------- 8.14s 2025-02-10 09:21:35.029213 | orchestrator | memcached : Check memcached container ----------------------------------- 5.79s 2025-02-10 09:21:35.029227 | orchestrator | memcached : Copying over config.json files for services ----------------- 4.47s 2025-02-10 09:21:35.029241 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.17s 2025-02-10 09:21:35.029255 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.93s 2025-02-10 09:21:35.029268 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s 2025-02-10 09:21:35.029281 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2025-02-10 09:21:35.029295 | orchestrator | 2025-02-10 09:21:35.029309 | orchestrator | 2025-02-10 09:21:35 | INFO  | Task 335664ec-bdec-4b15-93da-63e1d40d2b77 is in state SUCCESS 2025-02-10 09:21:35.029332 | orchestrator | 2025-02-10 09:21:35.029346 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:21:35.029360 | orchestrator | 2025-02-10 09:21:35.029374 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:21:35.029388 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:00.754) 0:00:00.754 ******* 2025-02-10 09:21:35.029401 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:21:35.029415 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:21:35.029429 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:21:35.029443 | orchestrator | 2025-02-10 09:21:35.029468 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2025-02-10 09:21:35.029490 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:00.502) 0:00:01.257 ******* 2025-02-10 09:21:35.029528 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-02-10 09:21:35.029553 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-02-10 09:21:35.029576 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-02-10 09:21:35.029599 | orchestrator | 2025-02-10 09:21:35.029623 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-02-10 09:21:35.029672 | orchestrator | 2025-02-10 09:21:35.029696 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-02-10 09:21:35.029720 | orchestrator | Monday 10 February 2025 09:21:02 +0000 (0:00:00.450) 0:00:01.708 ******* 2025-02-10 09:21:35.029743 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:21:35.029767 | orchestrator | 2025-02-10 09:21:35.029787 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-02-10 09:21:35.029801 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:01.458) 0:00:03.166 ******* 2025-02-10 09:21:35.029817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.029838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.029854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.029869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.029906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.029933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.029948 | orchestrator | 2025-02-10 09:21:35.029962 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-02-10 09:21:35.029976 | orchestrator | Monday 10 February 2025 09:21:05 +0000 (0:00:01.944) 0:00:05.111 ******* 2025-02-10 09:21:35.029990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030161 | orchestrator | 2025-02-10 09:21:35.030176 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-02-10 09:21:35.030189 | orchestrator | Monday 10 February 2025 09:21:10 +0000 (0:00:04.791) 0:00:09.903 ******* 2025-02-10 09:21:35.030203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 
6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030307 | orchestrator | 2025-02-10 
09:21:35.030321 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-02-10 09:21:35.030335 | orchestrator | Monday 10 February 2025 09:21:16 +0000 (0:00:05.912) 0:00:15.815 ******* 2025-02-10 09:21:35.030349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:35.030421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2025-02-10 09:21:38.089358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:21:38.089498 | orchestrator | 2025-02-10 09:21:38.089521 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-10 09:21:38.089538 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:02.452) 0:00:18.267 ******* 2025-02-10 09:21:38.089552 | orchestrator | 2025-02-10 09:21:38.089567 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-10 09:21:38.089580 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:00.090) 0:00:18.358 ******* 2025-02-10 09:21:38.089594 | orchestrator | 2025-02-10 09:21:38.089608 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-10 09:21:38.089622 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:00.078) 0:00:18.437 ******* 2025-02-10 09:21:38.089675 | orchestrator | 2025-02-10 09:21:38.089690 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-02-10 09:21:38.089703 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:00.073) 0:00:18.511 ******* 2025-02-10 09:21:38.089717 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:21:38.089733 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:21:38.089747 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:21:38.089786 | orchestrator | 2025-02-10 09:21:38.089800 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-02-10 09:21:38.089814 | orchestrator | Monday 10 February 2025 09:21:22 +0000 (0:00:03.955) 0:00:22.466 ******* 2025-02-10 09:21:38.089828 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:21:38.089842 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:21:38.089856 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:21:38.089870 | orchestrator | 2025-02-10 09:21:38.089981 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:21:38.089998 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:21:38.090091 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:21:38.090119 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:21:38.090145 | orchestrator | 2025-02-10 09:21:38.090170 | orchestrator | 2025-02-10 09:21:38.090191 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:21:38.090237 | orchestrator | Monday 10 February 2025 09:21:33 +0000 (0:00:10.301) 0:00:32.768 ******* 2025-02-10 09:21:38.090252 | orchestrator | 
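The redis and redis-sentinel items above each carry a healthcheck block whose durations are plain second-valued strings ('interval': '30', 'timeout': '30', 'start_period': '5') alongside 'retries' and a CMD-SHELL test. The Docker Engine API expects these durations in nanoseconds, so a spec like this has to be converted before it can be attached to a container. The helper below is a minimal sketch of that conversion, assuming the Engine API's HealthConfig shape; only the healthcheck values themselves come from the log.

    # Minimal sketch (not kolla-ansible's implementation): convert the
    # second-based healthcheck dict logged above into the nanosecond-based
    # HealthConfig structure of the Docker Engine API.
    NANOSECONDS_PER_SECOND = 1_000_000_000

    def to_docker_healthcheck(spec):
        return {
            "Test": spec["test"],   # e.g. ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
            "Interval": int(spec["interval"]) * NANOSECONDS_PER_SECOND,
            "Timeout": int(spec["timeout"]) * NANOSECONDS_PER_SECOND,
            "StartPeriod": int(spec["start_period"]) * NANOSECONDS_PER_SECOND,
            "Retries": int(spec["retries"]),
        }

    healthcheck = to_docker_healthcheck({
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
        "timeout": "30",
    })
    # A dict in this shape is intended for the healthcheck argument of
    # docker.from_env().containers.run(...); checking the exact keyword
    # handling of the installed SDK version is left as an assumption here.
    print(healthcheck)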
=============================================================================== 2025-02-10 09:21:38.090266 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.30s 2025-02-10 09:21:38.090280 | orchestrator | redis : Copying over redis config files --------------------------------- 5.91s 2025-02-10 09:21:38.090293 | orchestrator | redis : Copying over default config.json files -------------------------- 4.79s 2025-02-10 09:21:38.090307 | orchestrator | redis : Restart redis container ----------------------------------------- 3.96s 2025-02-10 09:21:38.090320 | orchestrator | redis : Check redis containers ------------------------------------------ 2.45s 2025-02-10 09:21:38.090334 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.94s 2025-02-10 09:21:38.090348 | orchestrator | redis : include_tasks --------------------------------------------------- 1.46s 2025-02-10 09:21:38.090361 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2025-02-10 09:21:38.090375 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-02-10 09:21:38.090388 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2025-02-10 09:21:38.090403 | orchestrator | 2025-02-10 09:21:35 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:38.090417 | orchestrator | 2025-02-10 09:21:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:38.090453 | orchestrator | 2025-02-10 09:21:38 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:38.090940 | orchestrator | 2025-02-10 09:21:38 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:38.090979 | orchestrator | 2025-02-10 09:21:38 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:21:38.091707 | orchestrator | 2025-02-10 09:21:38 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:38.092943 | orchestrator | 2025-02-10 09:21:38 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:41.157780 | orchestrator | 2025-02-10 09:21:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:41.157955 | orchestrator | 2025-02-10 09:21:41 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:41.158119 | orchestrator | 2025-02-10 09:21:41 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:41.158690 | orchestrator | 2025-02-10 09:21:41 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:21:41.159275 | orchestrator | 2025-02-10 09:21:41 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:41.160565 | orchestrator | 2025-02-10 09:21:41 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:44.218700 | orchestrator | 2025-02-10 09:21:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:21:44.218853 | orchestrator | 2025-02-10 09:21:44 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:21:44.224155 | orchestrator | 2025-02-10 09:21:44 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:21:44.227974 | orchestrator | 2025-02-10 09:21:44 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state 
STARTED 2025-02-10 09:21:44.230354 | orchestrator | 2025-02-10 09:21:44 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:21:44.235400 | orchestrator | 2025-02-10 09:21:44 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:21:47.277782 | orchestrator | 2025-02-10 09:21:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:24.204243 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:22:24.206801 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:22:24.206842 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:22:24.206866 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:22:27.335282 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:22:27.335422 | orchestrator | 2025-02-10
09:22:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:27.335465 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:22:27.337906 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:22:30.382232 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:22:30.382378 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:22:30.382401 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:22:30.382422 | orchestrator | 2025-02-10 09:22:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:30.382493 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:22:30.383283 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:22:30.383316 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:22:30.384875 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:22:30.385871 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:22:33.413293 | orchestrator | 2025-02-10 09:22:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:33.413507 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:22:33.413592 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state STARTED 2025-02-10 09:22:33.413618 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:22:33.413947 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:22:33.414533 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:22:36.454268 | orchestrator | 2025-02-10 09:22:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:36.454427 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:22:36.456600 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task d666a261-d102-416d-926b-3db92b001c0e is in state SUCCESS 2025-02-10 09:22:36.456651 | orchestrator | 2025-02-10 09:22:36.456717 | orchestrator | 2025-02-10 09:22:36.456732 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:22:36.456747 | orchestrator | 2025-02-10 09:22:36.456760 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:22:36.456775 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:00.386) 0:00:00.386 ******* 2025-02-10 09:22:36.456788 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:36.456804 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:36.456817 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:36.456831 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:22:36.456844 | orchestrator | ok: [testbed-node-4] 2025-02-10 
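The play opens with two group_by tasks: the "Kolla action" one above and the "enabled services" one that follows. Both sort hosts into dynamic groups that later plays target. A minimal sketch of equivalent tasks; the exact key expressions used by kolla-ansible differ in detail, and the variable names below are assumptions:

- name: Group hosts based on Kolla action
  ansible.builtin.group_by:
    key: "kolla_action_{{ kolla_action }}"

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_openvswitch_{{ enable_openvswitch }}_enable_ovs_dpdk_{{ enable_ovs_dpdk }}"

With Open vSwitch enabled and OVS-DPDK disabled, the second key renders to enable_openvswitch_True_enable_ovs_dpdk_False, matching the loop item shown in the task output that follows.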
09:22:36.456858 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:22:36.456871 | orchestrator | 2025-02-10 09:22:36.456884 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:22:36.456898 | orchestrator | Monday 10 February 2025 09:21:02 +0000 (0:00:01.263) 0:00:01.649 ******* 2025-02-10 09:22:36.456912 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:22:36.456925 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:22:36.456939 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:22:36.456952 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:22:36.456966 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:22:36.456997 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:22:36.457010 | orchestrator | 2025-02-10 09:22:36.457023 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-02-10 09:22:36.457035 | orchestrator | 2025-02-10 09:22:36.457048 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-02-10 09:22:36.457086 | orchestrator | Monday 10 February 2025 09:21:04 +0000 (0:00:01.620) 0:00:03.270 ******* 2025-02-10 09:22:36.457100 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:22:36.457114 | orchestrator | 2025-02-10 09:22:36.457127 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-10 09:22:36.457139 | orchestrator | Monday 10 February 2025 09:21:07 +0000 (0:00:02.619) 0:00:05.890 ******* 2025-02-10 09:22:36.457152 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-02-10 09:22:36.457165 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-02-10 09:22:36.457178 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-02-10 09:22:36.457190 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-02-10 09:22:36.457203 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-02-10 09:22:36.457217 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-02-10 09:22:36.457231 | orchestrator | 2025-02-10 09:22:36.457245 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-10 09:22:36.457259 | orchestrator | Monday 10 February 2025 09:21:10 +0000 (0:00:03.732) 0:00:09.622 ******* 2025-02-10 09:22:36.457278 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-02-10 09:22:36.457293 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-02-10 09:22:36.457307 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-02-10 09:22:36.457321 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-02-10 09:22:36.457335 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-02-10 09:22:36.457348 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-02-10 09:22:36.457361 | orchestrator | 2025-02-10 09:22:36.457375 | orchestrator | TASK [module-load : Drop module persistence] 
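The module-load output above shows the openvswitch kernel module being loaded and then persisted through modules-load.d on all six nodes. A rough sketch of equivalent tasks, assuming the community.general collection is available (file name and mode are illustrative):

- name: Load modules
  community.general.modprobe:
    name: openvswitch
    state: present

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    content: "openvswitch\n"
    dest: /etc/modules-load.d/openvswitch.conf
    mode: "0644"

The "Drop module persistence" task that follows is skipped on every node, so the persisted entry is kept.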
*********************************** 2025-02-10 09:22:36.457389 | orchestrator | Monday 10 February 2025 09:21:15 +0000 (0:00:04.430) 0:00:14.054 ******* 2025-02-10 09:22:36.457403 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-02-10 09:22:36.457417 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:36.457432 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-02-10 09:22:36.457446 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:36.457460 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-02-10 09:22:36.457474 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:36.457489 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-02-10 09:22:36.457503 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:36.457517 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-02-10 09:22:36.457529 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:36.457542 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-02-10 09:22:36.457554 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:36.457567 | orchestrator | 2025-02-10 09:22:36.457580 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-02-10 09:22:36.457593 | orchestrator | Monday 10 February 2025 09:21:17 +0000 (0:00:02.039) 0:00:16.094 ******* 2025-02-10 09:22:36.457606 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:36.457618 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:36.457631 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:36.457644 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:36.457657 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:36.457694 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:36.457707 | orchestrator | 2025-02-10 09:22:36.457720 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-02-10 09:22:36.457772 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:00.872) 0:00:16.966 ******* 2025-02-10 09:22:36.457803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.457987 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458000 | orchestrator | 2025-02-10 09:22:36.458013 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-02-10 09:22:36.458072 | orchestrator | Monday 10 February 2025 09:21:20 +0000 (0:00:02.396) 0:00:19.363 ******* 2025-02-10 09:22:36.458085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458252 | 
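Each loop item in these tasks is one entry of the role's service map. Re-expressed as YAML, the openvswitch-db-server definition from the output above looks roughly like this (the top-level variable name openvswitch_services is an assumption; the values are taken from the log):

openvswitch_services:
  openvswitch-db-server:
    container_name: openvswitch_db
    image: nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206
    enabled: true
    group: openvswitch
    host_in_groups: true
    volumes:
      - /etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /lib/modules:/lib/modules:ro
      - /run/openvswitch:/run/openvswitch:shared
      - kolla_logs:/var/log/kolla/
      - openvswitch_db:/var/lib/openvswitch/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "ovsdb-client list-dbs"]
      timeout: "30"

The healthcheck fields map onto the Docker healthcheck options, so the database container is probed with ovsdb-client list-dbs every 30 seconds and marked unhealthy after 3 failed attempts; the vswitchd container uses ovs-appctl version in the same way.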
orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458328 | orchestrator | 2025-02-10 09:22:36.458340 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-02-10 09:22:36.458353 | orchestrator | Monday 10 February 2025 09:21:23 +0000 (0:00:03.071) 0:00:22.434 ******* 2025-02-10 09:22:36.458365 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:36.458419 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:36.458434 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:36.458447 | orchestrator | changed: [testbed-node-3] 2025-02-10 
09:22:36.458460 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:36.458472 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:36.458485 | orchestrator | 2025-02-10 09:22:36.458498 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-02-10 09:22:36.458511 | orchestrator | Monday 10 February 2025 09:21:27 +0000 (0:00:03.873) 0:00:26.308 ******* 2025-02-10 09:22:36.458523 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:36.458536 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:36.458548 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:36.458561 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:36.458574 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:36.458587 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:36.458599 | orchestrator | 2025-02-10 09:22:36.458612 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-02-10 09:22:36.458625 | orchestrator | Monday 10 February 2025 09:21:30 +0000 (0:00:02.974) 0:00:29.282 ******* 2025-02-10 09:22:36.458638 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:36.458650 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:36.458690 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:36.458704 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:36.458717 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:36.458729 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:36.458742 | orchestrator | 2025-02-10 09:22:36.458755 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-02-10 09:22:36.458768 | orchestrator | Monday 10 February 2025 09:21:31 +0000 (0:00:01.310) 0:00:30.593 ******* 2025-02-10 09:22:36.458793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.458995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.459017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:22:36.459031 | orchestrator | 2025-02-10 09:22:36.459045 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:22:36.459058 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:03.173) 0:00:33.766 ******* 2025-02-10 09:22:36.459072 | orchestrator | 2025-02-10 09:22:36.459086 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:22:36.459099 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:00.133) 0:00:33.899 ******* 2025-02-10 09:22:36.459113 | orchestrator | 2025-02-10 09:22:36.459126 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:22:36.459139 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:00.384) 0:00:34.284 ******* 2025-02-10 09:22:36.459152 | orchestrator | 2025-02-10 09:22:36.459164 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:22:36.459178 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:00.314) 0:00:34.599 ******* 2025-02-10 09:22:36.459190 | orchestrator | 2025-02-10 09:22:36.459207 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:22:36.459220 | orchestrator | Monday 10 February 2025 09:21:36 +0000 (0:00:00.546) 0:00:35.145 ******* 2025-02-10 09:22:36.459233 | orchestrator | 2025-02-10 09:22:36.459246 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:22:36.459258 | orchestrator | Monday 10 February 2025 09:21:36 +0000 (0:00:00.307) 0:00:35.453 ******* 2025-02-10 09:22:36.459271 | orchestrator | 2025-02-10 09:22:36.459284 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-02-10 09:22:36.459297 | orchestrator | Monday 10 February 2025 09:21:37 +0000 (0:00:00.402) 0:00:35.855 ******* 2025-02-10 09:22:36.459310 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:36.459322 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:36.459341 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:36.459356 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:36.459404 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:36.459418 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:36.459431 | orchestrator | 2025-02-10 09:22:36.459443 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-02-10 09:22:36.459456 | orchestrator | Monday 10 February 2025 09:21:49 +0000 (0:00:12.678) 0:00:48.534 ******* 2025-02-10 09:22:36.459468 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:36.459480 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:36.459493 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:36.459505 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:22:36.459517 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:22:36.459529 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:22:36.459542 | orchestrator | 2025-02-10 09:22:36.459555 | orchestrator | RUNNING HANDLER [openvswitch : 
Restart openvswitch-vswitchd container] ********* 2025-02-10 09:22:36.459567 | orchestrator | Monday 10 February 2025 09:21:53 +0000 (0:00:03.563) 0:00:52.098 ******* 2025-02-10 09:22:36.459580 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:36.459594 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:36.459608 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:36.459621 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:36.459641 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:36.459653 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:36.459721 | orchestrator | 2025-02-10 09:22:36.459735 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-02-10 09:22:36.459747 | orchestrator | Monday 10 February 2025 09:22:04 +0000 (0:00:10.942) 0:01:03.040 ******* 2025-02-10 09:22:36.459760 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-02-10 09:22:36.459772 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-02-10 09:22:36.459785 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-02-10 09:22:36.459797 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-02-10 09:22:36.459810 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-02-10 09:22:36.459822 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-02-10 09:22:36.459834 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-02-10 09:22:36.459847 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-02-10 09:22:36.459859 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-02-10 09:22:36.459872 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-02-10 09:22:36.459884 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-02-10 09:22:36.459896 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-02-10 09:22:36.459909 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:22:36.459921 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:22:36.459934 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:22:36.459946 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:22:36.459964 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:22:36.459977 | orchestrator | ok: [testbed-node-5] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:22:36.459990 | orchestrator | 2025-02-10 09:22:36.460002 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-02-10 09:22:36.460015 | orchestrator | Monday 10 February 2025 09:22:16 +0000 (0:00:12.014) 0:01:15.055 ******* 2025-02-10 09:22:36.460027 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-02-10 09:22:36.460039 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:36.460052 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-02-10 09:22:36.460064 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:36.460076 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-02-10 09:22:36.460088 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:36.460101 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-02-10 09:22:36.460114 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-02-10 09:22:36.460126 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-02-10 09:22:36.460138 | orchestrator | 2025-02-10 09:22:36.460163 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-02-10 09:22:36.460176 | orchestrator | Monday 10 February 2025 09:22:18 +0000 (0:00:02.596) 0:01:17.651 ******* 2025-02-10 09:22:36.460188 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-02-10 09:22:36.460201 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:36.460222 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-02-10 09:22:36.460349 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:36.460365 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-02-10 09:22:36.460377 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:36.460390 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-02-10 09:22:36.460402 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-02-10 09:22:36.460415 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-02-10 09:22:36.460427 | orchestrator | 2025-02-10 09:22:36.460440 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-02-10 09:22:36.460452 | orchestrator | Monday 10 February 2025 09:22:23 +0000 (0:00:04.382) 0:01:22.034 ******* 2025-02-10 09:22:36.460465 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:36.460477 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:36.460489 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:36.460501 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:36.460514 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:36.460526 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:36.460538 | orchestrator | 2025-02-10 09:22:36.460550 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:22:36.460563 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:22:36.460578 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:22:36.460590 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:22:36.460603 | orchestrator | testbed-node-3 : ok=15  changed=11 
 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:22:36.460615 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:22:36.460633 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:22:36.460646 | orchestrator | 2025-02-10 09:22:36.460659 | orchestrator | 2025-02-10 09:22:36.460690 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:22:36.460703 | orchestrator | Monday 10 February 2025 09:22:33 +0000 (0:00:10.515) 0:01:32.550 ******* 2025-02-10 09:22:36.460715 | orchestrator | =============================================================================== 2025-02-10 09:22:36.460727 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.46s 2025-02-10 09:22:36.460739 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.68s 2025-02-10 09:22:36.460752 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 12.01s 2025-02-10 09:22:36.460764 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 4.43s 2025-02-10 09:22:36.460776 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.38s 2025-02-10 09:22:36.460788 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.87s 2025-02-10 09:22:36.460801 | orchestrator | module-load : Load modules ---------------------------------------------- 3.73s 2025-02-10 09:22:36.460813 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 3.56s 2025-02-10 09:22:36.460833 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.17s 2025-02-10 09:22:36.460845 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.07s 2025-02-10 09:22:36.460862 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.97s 2025-02-10 09:22:36.460875 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.62s 2025-02-10 09:22:36.460887 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.60s 2025-02-10 09:22:36.460899 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.40s 2025-02-10 09:22:36.460912 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.09s 2025-02-10 09:22:36.460924 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.04s 2025-02-10 09:22:36.460936 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.62s 2025-02-10 09:22:36.460948 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.31s 2025-02-10 09:22:36.460960 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.26s 2025-02-10 09:22:36.460973 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.87s 2025-02-10 09:22:36.460986 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:22:36.461000 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 
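Per the recap above, only testbed-node-0/1/2 run the "Ensuring OVS bridge is properly setup" and "Ensuring OVS ports are properly setup" tasks, creating br-ex and attaching vxlan0 to it, while testbed-node-3/4/5 skip both. The role appears to drive this through ovs-vsctl; conceptually the same result could be sketched with the openvswitch.openvswitch collection (the module choice here is an assumption, not what the role actually calls):

- name: Ensure the external bridge exists
  openvswitch.openvswitch.openvswitch_bridge:
    bridge: br-ex
    state: present

- name: Ensure the external interface is plugged into the bridge
  openvswitch.openvswitch.openvswitch_port:
    bridge: br-ex
    port: vxlan0
    state: present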
09:22:36.461020 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED
[... repetitive polling output condensed: the same five tasks (ed4301b9-fc21-4732-b103-f0e045a3a493, cf661d87-95a4-4b94-8940-915842fea4ae, ade4e3c5-3c23-406b-b37c-ccf1b999f9a5, 9815f12d-7e73-4884-b7b4-1dde02616d49, 0ab1a19b-6d24-4f06-9a59-da7e131c67e2) are re-checked roughly every three seconds, each time reported as "is in state STARTED" and followed by "Wait 1 second(s) until the next check", from 09:22:36 through 09:23:43 ...]
2025-02-10 09:23:46.890079 | orchestrator | 2025-02-10
09:23:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:46.890210 | orchestrator | 2025-02-10 09:23:46 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:23:46.892431 | orchestrator | 2025-02-10 09:23:46 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:23:46.894203 | orchestrator | 2025-02-10 09:23:46 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:23:46.894883 | orchestrator | 2025-02-10 09:23:46 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:23:46.895374 | orchestrator | 2025-02-10 09:23:46 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:23:49.952952 | orchestrator | 2025-02-10 09:23:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:49.953143 | orchestrator | 2025-02-10 09:23:49 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:23:49.954493 | orchestrator | 2025-02-10 09:23:49 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:23:49.954581 | orchestrator | 2025-02-10 09:23:49 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:23:49.956501 | orchestrator | 2025-02-10 09:23:49 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:23:49.959294 | orchestrator | 2025-02-10 09:23:49 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:23:49.959484 | orchestrator | 2025-02-10 09:23:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:53.025492 | orchestrator | 2025-02-10 09:23:53 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:23:53.030183 | orchestrator | 2025-02-10 09:23:53 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:23:53.039421 | orchestrator | 2025-02-10 09:23:53 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:23:53.045387 | orchestrator | 2025-02-10 09:23:53 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:23:53.051006 | orchestrator | 2025-02-10 09:23:53 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state STARTED 2025-02-10 09:23:56.098321 | orchestrator | 2025-02-10 09:23:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:56.098480 | orchestrator | 2025-02-10 09:23:56 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:23:56.105957 | orchestrator | 2025-02-10 09:23:56 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:23:56.107553 | orchestrator | 2025-02-10 09:23:56 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:23:56.107603 | orchestrator | 2025-02-10 09:23:56 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:23:56.107629 | orchestrator | 2025-02-10 09:23:56.107645 | orchestrator | 2025-02-10 09:23:56.107659 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-02-10 09:23:56.107703 | orchestrator | 2025-02-10 09:23:56.107763 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-02-10 09:23:56.107779 | orchestrator | Monday 10 February 2025 09:19:23 +0000 (0:00:00.315) 0:00:00.315 ******* 2025-02-10 09:23:56.107793 | orchestrator | ok: [testbed-node-3] 
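The "Validating arguments against arg spec 'main'" steps that open each k3s role come from Ansible's role argument validation: a role that ships meta/argument_specs.yml gets an automatic task that checks the supplied variables before any real work runs. A minimal sketch of such a spec follows; the option shown is invented for illustration and is not the role's actual interface.

```yaml
# roles/k3s_prereq/meta/argument_specs.yml -- minimal sketch, option invented
argument_specs:
  main:
    short_description: Prerequisites
    options:
      system_timezone:
        type: str
        required: false
        description: Timezone to set on every server.
```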
2025-02-10 09:23:56.107808 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:23:56.107822 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:23:56.107836 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.107849 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.107863 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.107877 | orchestrator | 2025-02-10 09:23:56.107891 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-02-10 09:23:56.107904 | orchestrator | Monday 10 February 2025 09:19:25 +0000 (0:00:01.506) 0:00:01.821 ******* 2025-02-10 09:23:56.107918 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.107932 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.107946 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.107960 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.107973 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.107987 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.108000 | orchestrator | 2025-02-10 09:23:56.108014 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-02-10 09:23:56.108028 | orchestrator | Monday 10 February 2025 09:19:27 +0000 (0:00:02.612) 0:00:04.434 ******* 2025-02-10 09:23:56.108042 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.108055 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.108069 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.108082 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.108096 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.108109 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.108122 | orchestrator | 2025-02-10 09:23:56.108137 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-02-10 09:23:56.108153 | orchestrator | Monday 10 February 2025 09:19:29 +0000 (0:00:01.649) 0:00:06.083 ******* 2025-02-10 09:23:56.108169 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:23:56.108184 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:23:56.108200 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:23:56.108215 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.108230 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.108245 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.108261 | orchestrator | 2025-02-10 09:23:56.108277 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-02-10 09:23:56.108292 | orchestrator | Monday 10 February 2025 09:19:35 +0000 (0:00:05.744) 0:00:11.828 ******* 2025-02-10 09:23:56.108308 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:23:56.108324 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:23:56.108339 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:23:56.108355 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.108370 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.108385 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.108401 | orchestrator | 2025-02-10 09:23:56.108417 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-02-10 09:23:56.108433 | orchestrator | Monday 10 February 2025 09:19:37 +0000 (0:00:02.490) 0:00:14.318 ******* 2025-02-10 09:23:56.108448 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:23:56.108463 | 
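The k3s_prereq tasks above flip kernel forwarding switches on every node before k3s is installed. A minimal sketch of those settings with ansible.posix.sysctl follows; the task names match the log, but the exact sysctl keys, values and any conditionals in the real role are assumptions.

```yaml
# Sketch: kernel prerequisites matching the k3s_prereq task names above
# (exact keys and values assumed).
- hosts: k3s_nodes                # hypothetical group name
  become: true
  tasks:
    - name: Enable IPv4 forwarding
      ansible.posix.sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
        reload: true

    - name: Enable IPv6 forwarding
      ansible.posix.sysctl:
        name: net.ipv6.conf.all.forwarding
        value: "1"
        state: present
        reload: true

    - name: Enable IPv6 router advertisements
      ansible.posix.sysctl:
        name: net.ipv6.conf.all.accept_ra
        value: "2"
        state: present
        reload: true
```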
orchestrator | changed: [testbed-node-5] 2025-02-10 09:23:56.108479 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.108512 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.108526 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.108540 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:23:56.108553 | orchestrator | 2025-02-10 09:23:56.108567 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-02-10 09:23:56.108581 | orchestrator | Monday 10 February 2025 09:19:40 +0000 (0:00:02.555) 0:00:16.874 ******* 2025-02-10 09:23:56.108604 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.108618 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.108635 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.108658 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.108680 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.108701 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.108877 | orchestrator | 2025-02-10 09:23:56.108907 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-02-10 09:23:56.108932 | orchestrator | Monday 10 February 2025 09:19:41 +0000 (0:00:00.889) 0:00:17.763 ******* 2025-02-10 09:23:56.108955 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.108979 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.109002 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.109026 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.109050 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.109075 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.109099 | orchestrator | 2025-02-10 09:23:56.109123 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-02-10 09:23:56.109148 | orchestrator | Monday 10 February 2025 09:19:42 +0000 (0:00:01.075) 0:00:18.839 ******* 2025-02-10 09:23:56.109171 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:23:56.109192 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:23:56.109216 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.109239 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:23:56.109269 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:23:56.109294 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.109319 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:23:56.109343 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:23:56.109367 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.109391 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:23:56.109433 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:23:56.109459 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.109483 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:23:56.109507 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:23:56.109531 | orchestrator | skipping: 
[testbed-node-1] 2025-02-10 09:23:56.109556 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:23:56.109579 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:23:56.109603 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.109627 | orchestrator | 2025-02-10 09:23:56.109651 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-02-10 09:23:56.109675 | orchestrator | Monday 10 February 2025 09:19:43 +0000 (0:00:00.872) 0:00:19.712 ******* 2025-02-10 09:23:56.109698 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.109762 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.109787 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.109811 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.109835 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.109858 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.109878 | orchestrator | 2025-02-10 09:23:56.109903 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-02-10 09:23:56.109929 | orchestrator | Monday 10 February 2025 09:19:44 +0000 (0:00:01.552) 0:00:21.264 ******* 2025-02-10 09:23:56.109967 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:23:56.109992 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:23:56.110080 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:23:56.110112 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.110135 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.110160 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.110186 | orchestrator | 2025-02-10 09:23:56.110209 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-02-10 09:23:56.110233 | orchestrator | Monday 10 February 2025 09:19:45 +0000 (0:00:00.933) 0:00:22.198 ******* 2025-02-10 09:23:56.110257 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.110281 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:23:56.110304 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:23:56.110329 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.110352 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.110375 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:23:56.110400 | orchestrator | 2025-02-10 09:23:56.110424 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-02-10 09:23:56.110449 | orchestrator | Monday 10 February 2025 09:19:51 +0000 (0:00:05.528) 0:00:27.726 ******* 2025-02-10 09:23:56.110472 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.110497 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.110521 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.110544 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.110568 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.110591 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.110615 | orchestrator | 2025-02-10 09:23:56.110639 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-02-10 09:23:56.110662 | orchestrator | Monday 10 February 2025 09:19:52 +0000 (0:00:01.093) 0:00:28.820 ******* 2025-02-10 09:23:56.110686 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.110779 | 
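"Download k3s binary x64" typically boils down to a checksum-verified get_url of the release binary; the arm64 and armhf variants are skipped here because the nodes are x86_64. The sketch below shows that pattern; the version, URL layout and checksum file are placeholders based on the public k3s releases, not the role's actual variables.

```yaml
# Sketch: fetch a pinned k3s release binary (version and URLs are placeholders).
- hosts: k3s_nodes
  become: true
  vars:
    k3s_version: v1.30.0+k3s1     # assumed example version
  tasks:
    - name: Download k3s binary x64
      ansible.builtin.get_url:
        url: "https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s"
        checksum: "sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-amd64.txt"
        dest: /usr/local/bin/k3s
        owner: root
        group: root
        mode: "0755"
```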
orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.110809 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.110835 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.110862 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.110888 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.110924 | orchestrator | 2025-02-10 09:23:56.110948 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-02-10 09:23:56.110971 | orchestrator | Monday 10 February 2025 09:19:53 +0000 (0:00:01.291) 0:00:30.112 ******* 2025-02-10 09:23:56.110992 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.111013 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.111036 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.111058 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.111081 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.111103 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.111126 | orchestrator | 2025-02-10 09:23:56.111149 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-02-10 09:23:56.111173 | orchestrator | Monday 10 February 2025 09:19:54 +0000 (0:00:00.749) 0:00:30.861 ******* 2025-02-10 09:23:56.111197 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-02-10 09:23:56.111221 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-02-10 09:23:56.111244 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.111267 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-02-10 09:23:56.111290 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-02-10 09:23:56.111314 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.111337 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-02-10 09:23:56.111388 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-02-10 09:23:56.111411 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.111431 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-02-10 09:23:56.111450 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-02-10 09:23:56.111485 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.111506 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-02-10 09:23:56.111527 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-02-10 09:23:56.111549 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.111570 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-02-10 09:23:56.111591 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-02-10 09:23:56.111613 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.111635 | orchestrator | 2025-02-10 09:23:56.111656 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-02-10 09:23:56.111690 | orchestrator | Monday 10 February 2025 09:19:55 +0000 (0:00:01.179) 0:00:32.040 ******* 2025-02-10 09:23:56.111733 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.111756 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.111777 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.111798 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.111819 | orchestrator | skipping: [testbed-node-1] 
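The k3s_custom_registries tasks are skipped in this run, but when enabled they create /etc/rancher/k3s and render a registries.yaml so containerd pulls images through a mirror. A minimal sketch of such a file follows, using the documented k3s registries.yaml structure; the mirror endpoint is a placeholder.

```yaml
# /etc/rancher/k3s/registries.yaml -- sketch with a placeholder mirror
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    tls:
      insecure_skip_verify: false
```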
2025-02-10 09:23:56.111840 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.111861 | orchestrator | 2025-02-10 09:23:56.111883 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-02-10 09:23:56.111903 | orchestrator | 2025-02-10 09:23:56.111924 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-02-10 09:23:56.111945 | orchestrator | Monday 10 February 2025 09:19:57 +0000 (0:00:01.583) 0:00:33.623 ******* 2025-02-10 09:23:56.111966 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.111987 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.112007 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.112028 | orchestrator | 2025-02-10 09:23:56.112050 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-02-10 09:23:56.112071 | orchestrator | Monday 10 February 2025 09:19:58 +0000 (0:00:01.474) 0:00:35.098 ******* 2025-02-10 09:23:56.112092 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.112112 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.112134 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.112155 | orchestrator | 2025-02-10 09:23:56.112176 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-02-10 09:23:56.112197 | orchestrator | Monday 10 February 2025 09:19:59 +0000 (0:00:01.133) 0:00:36.231 ******* 2025-02-10 09:23:56.112217 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.112238 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.112260 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.112280 | orchestrator | 2025-02-10 09:23:56.112302 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-02-10 09:23:56.112331 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:01.316) 0:00:37.548 ******* 2025-02-10 09:23:56.112353 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.112374 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.112394 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.112415 | orchestrator | 2025-02-10 09:23:56.112436 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-02-10 09:23:56.112457 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.611) 0:00:38.160 ******* 2025-02-10 09:23:56.112479 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.112500 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.112521 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.112542 | orchestrator | 2025-02-10 09:23:56.112564 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-02-10 09:23:56.112584 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.275) 0:00:38.435 ******* 2025-02-10 09:23:56.112606 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:23:56.112628 | orchestrator | 2025-02-10 09:23:56.112650 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-02-10 09:23:56.112683 | orchestrator | Monday 10 February 2025 09:20:02 +0000 (0:00:00.830) 0:00:39.265 ******* 2025-02-10 09:23:56.112704 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.112744 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.112765 | 
orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.112787 | orchestrator | 2025-02-10 09:23:56.112807 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-02-10 09:23:56.112827 | orchestrator | Monday 10 February 2025 09:20:03 +0000 (0:00:01.196) 0:00:40.462 ******* 2025-02-10 09:23:56.112847 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.112868 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.112888 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.112909 | orchestrator | 2025-02-10 09:23:56.112930 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-02-10 09:23:56.112951 | orchestrator | Monday 10 February 2025 09:20:04 +0000 (0:00:00.669) 0:00:41.131 ******* 2025-02-10 09:23:56.112971 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.112992 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.113012 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.113032 | orchestrator | 2025-02-10 09:23:56.113053 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-02-10 09:23:56.113074 | orchestrator | Monday 10 February 2025 09:20:05 +0000 (0:00:00.916) 0:00:42.048 ******* 2025-02-10 09:23:56.113095 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.113115 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.113136 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.113157 | orchestrator | 2025-02-10 09:23:56.113178 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-02-10 09:23:56.113200 | orchestrator | Monday 10 February 2025 09:20:07 +0000 (0:00:01.923) 0:00:43.971 ******* 2025-02-10 09:23:56.113220 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.113242 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.113263 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.113284 | orchestrator | 2025-02-10 09:23:56.113305 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-02-10 09:23:56.113326 | orchestrator | Monday 10 February 2025 09:20:07 +0000 (0:00:00.385) 0:00:44.356 ******* 2025-02-10 09:23:56.113348 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.113369 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.113391 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.113421 | orchestrator | 2025-02-10 09:23:56.113443 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-02-10 09:23:56.113464 | orchestrator | Monday 10 February 2025 09:20:08 +0000 (0:00:00.377) 0:00:44.734 ******* 2025-02-10 09:23:56.113485 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.113506 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.113528 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.113549 | orchestrator | 2025-02-10 09:23:56.113570 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-02-10 09:23:56.113592 | orchestrator | Monday 10 February 2025 09:20:09 +0000 (0:00:01.318) 0:00:46.053 ******* 2025-02-10 09:23:56.113625 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2025-02-10 09:23:56.113648 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-10 09:23:56.113669 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-10 09:23:56.113690 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-10 09:23:56.113774 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-10 09:23:56.113811 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-10 09:23:56.113833 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-10 09:23:56.113855 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-10 09:23:56.113873 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-10 09:23:56.113890 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-10 09:23:56.113914 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-10 09:23:56.113930 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-10 09:23:56.113947 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-02-10 09:23:56.113958 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-02-10 09:23:56.113968 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
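The retries above belong to a common k3s bootstrap pattern: the first servers run `k3s server` inside a transient systemd unit, the play polls until all control-plane nodes have joined, and only then is the temporary unit killed and the permanent k3s.service installed (as the following tasks show). A hedged sketch of the two central tasks follows; the unit name, server flags, token variable and expected node count are assumptions, since the role's real arguments are not visible in the log.

```yaml
# Sketch: transient bootstrap unit plus join verification (values assumed).
- hosts: k3s_servers
  become: true
  tasks:
    - name: Init cluster inside the transient k3s-init service
      ansible.builtin.command:
        cmd: >-
          systemd-run --unit=k3s-init
          k3s server --cluster-init --token {{ k3s_token }}
        creates: /run/systemd/transient/k3s-init.service

    - name: Verify that all nodes actually joined
      ansible.builtin.command:
        cmd: k3s kubectl get nodes -o name
      register: nodes
      until: nodes.stdout_lines | length == 3   # assumed: three control-plane nodes
      retries: 20
      delay: 10
      changed_when: false
```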
2025-02-10 09:23:56.113978 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.113988 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.113998 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.114008 | orchestrator | 2025-02-10 09:23:56.114046 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-02-10 09:23:56.114058 | orchestrator | Monday 10 February 2025 09:21:05 +0000 (0:00:56.223) 0:01:42.276 ******* 2025-02-10 09:23:56.114068 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.114078 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.114088 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.114097 | orchestrator | 2025-02-10 09:23:56.114107 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-02-10 09:23:56.114117 | orchestrator | Monday 10 February 2025 09:21:06 +0000 (0:00:00.357) 0:01:42.634 ******* 2025-02-10 09:23:56.114127 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.114137 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.114147 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.114157 | orchestrator | 2025-02-10 09:23:56.114167 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-02-10 09:23:56.114177 | orchestrator | Monday 10 February 2025 09:21:07 +0000 (0:00:01.179) 0:01:43.813 ******* 2025-02-10 09:23:56.114187 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.114197 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.114207 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.114217 | orchestrator | 2025-02-10 09:23:56.114227 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-02-10 09:23:56.114237 | orchestrator | Monday 10 February 2025 09:21:09 +0000 (0:00:01.760) 0:01:45.574 ******* 2025-02-10 09:23:56.114247 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.114257 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.114267 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.114277 | orchestrator | 2025-02-10 09:23:56.114287 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-02-10 09:23:56.114297 | orchestrator | Monday 10 February 2025 09:21:22 +0000 (0:00:13.833) 0:01:59.408 ******* 2025-02-10 09:23:56.114307 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.114324 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.114334 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.114344 | orchestrator | 2025-02-10 09:23:56.114354 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-02-10 09:23:56.114364 | orchestrator | Monday 10 February 2025 09:21:23 +0000 (0:00:01.027) 0:02:00.436 ******* 2025-02-10 09:23:56.114374 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.114384 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.114393 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.114403 | orchestrator | 2025-02-10 09:23:56.114417 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-02-10 09:23:56.114428 | orchestrator | Monday 10 February 2025 09:21:24 +0000 (0:00:00.869) 0:02:01.305 ******* 2025-02-10 09:23:56.114438 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.114448 | orchestrator | changed: 
[testbed-node-1] 2025-02-10 09:23:56.114458 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.114468 | orchestrator | 2025-02-10 09:23:56.114486 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-02-10 09:23:56.114497 | orchestrator | Monday 10 February 2025 09:21:25 +0000 (0:00:00.732) 0:02:02.038 ******* 2025-02-10 09:23:56.114507 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.114517 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.114527 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.114537 | orchestrator | 2025-02-10 09:23:56.114547 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-02-10 09:23:56.114557 | orchestrator | Monday 10 February 2025 09:21:26 +0000 (0:00:01.189) 0:02:03.228 ******* 2025-02-10 09:23:56.114567 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.114577 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.114586 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.114596 | orchestrator | 2025-02-10 09:23:56.114606 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-02-10 09:23:56.114616 | orchestrator | Monday 10 February 2025 09:21:27 +0000 (0:00:00.401) 0:02:03.629 ******* 2025-02-10 09:23:56.114626 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.114636 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.114646 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.114656 | orchestrator | 2025-02-10 09:23:56.114666 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-02-10 09:23:56.114676 | orchestrator | Monday 10 February 2025 09:21:27 +0000 (0:00:00.853) 0:02:04.482 ******* 2025-02-10 09:23:56.114686 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.114695 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.114727 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.114742 | orchestrator | 2025-02-10 09:23:56.114752 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-02-10 09:23:56.114762 | orchestrator | Monday 10 February 2025 09:21:28 +0000 (0:00:00.869) 0:02:05.352 ******* 2025-02-10 09:23:56.114772 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.114782 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.114792 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.114802 | orchestrator | 2025-02-10 09:23:56.114812 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-02-10 09:23:56.114822 | orchestrator | Monday 10 February 2025 09:21:30 +0000 (0:00:01.371) 0:02:06.724 ******* 2025-02-10 09:23:56.114832 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:56.114841 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:56.114851 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:56.114861 | orchestrator | 2025-02-10 09:23:56.114871 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-02-10 09:23:56.114881 | orchestrator | Monday 10 February 2025 09:21:31 +0000 (0:00:00.836) 0:02:07.561 ******* 2025-02-10 09:23:56.114891 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.114901 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.114911 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
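Once the node-token has been read and stored, each server's copied kubeconfig is pointed at the shared API endpoint; the task name in the log shows the target https://192.168.16.8:6443. A minimal sketch of that step follows; the cluster name "default" is what k3s writes into its kubeconfig, while the kubeconfig path is a placeholder.

```yaml
# Sketch: point the copied kubeconfig at the API endpoint seen in the log
# (kubeconfig path is a placeholder).
- hosts: k3s_servers
  vars:
    kubeconfig_path: /home/ubuntu/.kube/config
  tasks:
    - name: Configure kubectl cluster to https://192.168.16.8:6443
      ansible.builtin.command:
        cmd: >-
          kubectl config set-cluster default
          --server=https://192.168.16.8:6443
          --kubeconfig {{ kubeconfig_path }}
      changed_when: true
```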
09:23:56.114926 | orchestrator | 2025-02-10 09:23:56.114936 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-02-10 09:23:56.114946 | orchestrator | Monday 10 February 2025 09:21:31 +0000 (0:00:00.294) 0:02:07.855 ******* 2025-02-10 09:23:56.114956 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.114966 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.114975 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.114986 | orchestrator | 2025-02-10 09:23:56.114996 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-02-10 09:23:56.115006 | orchestrator | Monday 10 February 2025 09:21:31 +0000 (0:00:00.292) 0:02:08.148 ******* 2025-02-10 09:23:56.115016 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.115042 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.115053 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.115063 | orchestrator | 2025-02-10 09:23:56.115073 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-02-10 09:23:56.115083 | orchestrator | Monday 10 February 2025 09:21:32 +0000 (0:00:00.704) 0:02:08.852 ******* 2025-02-10 09:23:56.115093 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.115103 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.115113 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.115123 | orchestrator | 2025-02-10 09:23:56.115133 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-02-10 09:23:56.115143 | orchestrator | Monday 10 February 2025 09:21:32 +0000 (0:00:00.639) 0:02:09.492 ******* 2025-02-10 09:23:56.115153 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-10 09:23:56.115163 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-10 09:23:56.115174 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-10 09:23:56.115184 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-10 09:23:56.115194 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-10 09:23:56.115205 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-10 09:23:56.115215 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-10 09:23:56.115229 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-10 09:23:56.115239 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-10 09:23:56.115249 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-02-10 09:23:56.115259 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-10 09:23:56.115269 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-10 09:23:56.115284 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-02-10 09:23:56.115294 | orchestrator 
| changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-10 09:23:56.115304 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-10 09:23:56.115314 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-10 09:23:56.115324 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-10 09:23:56.115334 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-10 09:23:56.115344 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-10 09:23:56.115354 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-10 09:23:56.115374 | orchestrator | 2025-02-10 09:23:56.115384 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-02-10 09:23:56.115394 | orchestrator | 2025-02-10 09:23:56.115404 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-02-10 09:23:56.115414 | orchestrator | Monday 10 February 2025 09:21:36 +0000 (0:00:03.095) 0:02:12.588 ******* 2025-02-10 09:23:56.115424 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:23:56.115434 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:23:56.115445 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:23:56.115454 | orchestrator | 2025-02-10 09:23:56.115465 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-02-10 09:23:56.115474 | orchestrator | Monday 10 February 2025 09:21:36 +0000 (0:00:00.505) 0:02:13.093 ******* 2025-02-10 09:23:56.115484 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:23:56.115494 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:23:56.115504 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:23:56.115514 | orchestrator | 2025-02-10 09:23:56.115527 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-02-10 09:23:56.115538 | orchestrator | Monday 10 February 2025 09:21:37 +0000 (0:00:00.750) 0:02:13.844 ******* 2025-02-10 09:23:56.115548 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:23:56.115558 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:23:56.115567 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:23:56.115577 | orchestrator | 2025-02-10 09:23:56.115587 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-02-10 09:23:56.115597 | orchestrator | Monday 10 February 2025 09:21:37 +0000 (0:00:00.472) 0:02:14.317 ******* 2025-02-10 09:23:56.115607 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:23:56.115617 | orchestrator | 2025-02-10 09:23:56.115627 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-02-10 09:23:56.115637 | orchestrator | Monday 10 February 2025 09:21:38 +0000 (0:00:00.798) 0:02:15.115 ******* 2025-02-10 09:23:56.115647 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.115657 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.115667 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.115677 | orchestrator | 2025-02-10 09:23:56.115687 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-02-10 09:23:56.115697 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:00.418) 0:02:15.534 ******* 2025-02-10 09:23:56.115725 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.115736 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.115746 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.115756 | orchestrator | 2025-02-10 09:23:56.115766 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-02-10 09:23:56.115776 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:00.419) 0:02:15.953 ******* 2025-02-10 09:23:56.115786 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.115796 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.115806 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.115815 | orchestrator | 2025-02-10 09:23:56.115825 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-02-10 09:23:56.115835 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:00.458) 0:02:16.412 ******* 2025-02-10 09:23:56.115845 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:23:56.115855 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:23:56.115865 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:23:56.115875 | orchestrator | 2025-02-10 09:23:56.115885 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-02-10 09:23:56.115895 | orchestrator | Monday 10 February 2025 09:21:42 +0000 (0:00:02.268) 0:02:18.681 ******* 2025-02-10 09:23:56.115905 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:23:56.115915 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:23:56.115930 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:23:56.115940 | orchestrator | 2025-02-10 09:23:56.115950 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-02-10 09:23:56.115960 | orchestrator | 2025-02-10 09:23:56.115970 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-02-10 09:23:56.115979 | orchestrator | Monday 10 February 2025 09:21:50 +0000 (0:00:08.580) 0:02:27.261 ******* 2025-02-10 09:23:56.115989 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:56.115999 | orchestrator | 2025-02-10 09:23:56.116009 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-02-10 09:23:56.116019 | orchestrator | Monday 10 February 2025 09:21:51 +0000 (0:00:00.627) 0:02:27.889 ******* 2025-02-10 09:23:56.116029 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.116039 | orchestrator | 2025-02-10 09:23:56.116049 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-10 09:23:56.116059 | orchestrator | Monday 10 February 2025 09:21:51 +0000 (0:00:00.537) 0:02:28.427 ******* 2025-02-10 09:23:56.116069 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-10 09:23:56.116079 | orchestrator | 2025-02-10 09:23:56.116095 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-10 09:23:56.116105 | orchestrator | Monday 10 February 2025 09:21:52 +0000 (0:00:00.757) 0:02:29.185 ******* 2025-02-10 09:23:56.116115 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.116125 | orchestrator | 2025-02-10 
09:23:56.116135 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-02-10 09:23:56.116144 | orchestrator | Monday 10 February 2025 09:21:53 +0000 (0:00:00.917) 0:02:30.102 ******* 2025-02-10 09:23:56.116154 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.116164 | orchestrator | 2025-02-10 09:23:56.116174 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-02-10 09:23:56.116184 | orchestrator | Monday 10 February 2025 09:21:54 +0000 (0:00:00.735) 0:02:30.838 ******* 2025-02-10 09:23:56.116194 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-10 09:23:56.116204 | orchestrator | 2025-02-10 09:23:56.116214 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-02-10 09:23:56.116224 | orchestrator | Monday 10 February 2025 09:21:55 +0000 (0:00:00.864) 0:02:31.702 ******* 2025-02-10 09:23:56.116234 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-10 09:23:56.116244 | orchestrator | 2025-02-10 09:23:56.116254 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-02-10 09:23:56.116264 | orchestrator | Monday 10 February 2025 09:21:55 +0000 (0:00:00.660) 0:02:32.363 ******* 2025-02-10 09:23:56.116274 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.116284 | orchestrator | 2025-02-10 09:23:56.116294 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-02-10 09:23:56.116307 | orchestrator | Monday 10 February 2025 09:21:56 +0000 (0:00:00.523) 0:02:32.887 ******* 2025-02-10 09:23:56.116318 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.116327 | orchestrator | 2025-02-10 09:23:56.116337 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-02-10 09:23:56.116347 | orchestrator | 2025-02-10 09:23:56.116357 | orchestrator | TASK [osism.commons.kubectl : Gather variables for each operating system] ****** 2025-02-10 09:23:56.116367 | orchestrator | Monday 10 February 2025 09:21:57 +0000 (0:00:00.739) 0:02:33.626 ******* 2025-02-10 09:23:56.116377 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:56.116387 | orchestrator | 2025-02-10 09:23:56.116397 | orchestrator | TASK [osism.commons.kubectl : Include distribution specific install tasks] ***** 2025-02-10 09:23:56.116407 | orchestrator | Monday 10 February 2025 09:21:57 +0000 (0:00:00.186) 0:02:33.813 ******* 2025-02-10 09:23:56.116417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 09:23:56.116429 | orchestrator | 2025-02-10 09:23:56.116439 | orchestrator | TASK [osism.commons.kubectl : Remove old architecture-dependent repository] **** 2025-02-10 09:23:56.116454 | orchestrator | Monday 10 February 2025 09:21:57 +0000 (0:00:00.623) 0:02:34.436 ******* 2025-02-10 09:23:56.116464 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:56.116474 | orchestrator | 2025-02-10 09:23:56.116484 | orchestrator | TASK [osism.commons.kubectl : Install apt-transport-https package] ************* 2025-02-10 09:23:56.116493 | orchestrator | Monday 10 February 2025 09:21:59 +0000 (0:00:01.316) 0:02:35.753 ******* 2025-02-10 09:23:56.116503 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:56.116513 | orchestrator | 2025-02-10 09:23:56.116523 | orchestrator | TASK 
[osism.commons.kubectl : Add repository gpg key] ************************** 2025-02-10 09:23:56.116533 | orchestrator | Monday 10 February 2025 09:22:01 +0000 (0:00:02.368) 0:02:38.121 ******* 2025-02-10 09:23:56.116543 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.116553 | orchestrator | 2025-02-10 09:23:56.116562 | orchestrator | TASK [osism.commons.kubectl : Set permissions of gpg key] ********************** 2025-02-10 09:23:56.116572 | orchestrator | Monday 10 February 2025 09:22:02 +0000 (0:00:01.028) 0:02:39.149 ******* 2025-02-10 09:23:56.116582 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:56.116592 | orchestrator | 2025-02-10 09:23:56.116602 | orchestrator | TASK [osism.commons.kubectl : Add repository Debian] *************************** 2025-02-10 09:23:56.116612 | orchestrator | Monday 10 February 2025 09:22:03 +0000 (0:00:00.556) 0:02:39.705 ******* 2025-02-10 09:23:56.116622 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.116632 | orchestrator | 2025-02-10 09:23:56.116642 | orchestrator | TASK [osism.commons.kubectl : Install required packages] *********************** 2025-02-10 09:23:56.116651 | orchestrator | Monday 10 February 2025 09:22:11 +0000 (0:00:08.695) 0:02:48.401 ******* 2025-02-10 09:23:56.116661 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.116671 | orchestrator | 2025-02-10 09:23:56.116681 | orchestrator | TASK [osism.commons.kubectl : Remove kubectl symlink] ************************** 2025-02-10 09:23:56.116691 | orchestrator | Monday 10 February 2025 09:22:25 +0000 (0:00:13.723) 0:03:02.124 ******* 2025-02-10 09:23:56.116701 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:56.116756 | orchestrator | 2025-02-10 09:23:56.116767 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-02-10 09:23:56.116777 | orchestrator | 2025-02-10 09:23:56.116787 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-02-10 09:23:56.116798 | orchestrator | Monday 10 February 2025 09:22:26 +0000 (0:00:00.708) 0:03:02.833 ******* 2025-02-10 09:23:56.116808 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.116818 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.116827 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.116837 | orchestrator | 2025-02-10 09:23:56.116847 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-02-10 09:23:56.116857 | orchestrator | Monday 10 February 2025 09:22:26 +0000 (0:00:00.569) 0:03:03.402 ******* 2025-02-10 09:23:56.116867 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.116877 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.116887 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.116897 | orchestrator | 2025-02-10 09:23:56.116907 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-02-10 09:23:56.116917 | orchestrator | Monday 10 February 2025 09:22:27 +0000 (0:00:00.398) 0:03:03.800 ******* 2025-02-10 09:23:56.116927 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:23:56.116937 | orchestrator | 2025-02-10 09:23:56.116953 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-02-10 09:23:56.116964 | orchestrator | Monday 10 February 2025 09:22:27 +0000 (0:00:00.637) 
0:03:04.437 ******* 2025-02-10 09:23:56.116974 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:23:56.116984 | orchestrator | 2025-02-10 09:23:56.116994 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-02-10 09:23:56.117004 | orchestrator | Monday 10 February 2025 09:22:28 +0000 (0:00:00.565) 0:03:05.003 ******* 2025-02-10 09:23:56.117019 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:23:56.117033 | orchestrator | 2025-02-10 09:23:56.117043 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-02-10 09:23:56.117056 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:00.915) 0:03:05.919 ******* 2025-02-10 09:23:56.117066 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.117076 | orchestrator | 2025-02-10 09:23:56.117086 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-02-10 09:23:56.117096 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:00.208) 0:03:06.128 ******* 2025-02-10 09:23:56.117106 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:23:56.117116 | orchestrator | 2025-02-10 09:23:56.117126 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-02-10 09:23:56.117136 | orchestrator | Monday 10 February 2025 09:22:30 +0000 (0:00:01.023) 0:03:07.152 ******* 2025-02-10 09:23:56.117146 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.117156 | orchestrator | 2025-02-10 09:23:56.117166 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-02-10 09:23:56.117176 | orchestrator | Monday 10 February 2025 09:22:30 +0000 (0:00:00.180) 0:03:07.332 ******* 2025-02-10 09:23:56.117186 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.117196 | orchestrator | 2025-02-10 09:23:56.117206 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-02-10 09:23:56.117216 | orchestrator | Monday 10 February 2025 09:22:30 +0000 (0:00:00.191) 0:03:07.524 ******* 2025-02-10 09:23:56.117226 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.117236 | orchestrator | 2025-02-10 09:23:56.117246 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-02-10 09:23:56.117256 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:00.227) 0:03:07.752 ******* 2025-02-10 09:23:56.117266 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.117276 | orchestrator | 2025-02-10 09:23:56.117284 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-02-10 09:23:56.117292 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:00.289) 0:03:08.042 ******* 2025-02-10 09:23:56.117301 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:23:56.117309 | orchestrator | 2025-02-10 09:23:56.117318 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-02-10 09:23:56.117326 | orchestrator | Monday 10 February 2025 09:22:41 +0000 (0:00:10.097) 0:03:18.139 ******* 2025-02-10 09:23:56.117334 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-02-10 09:23:56.117343 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
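The "Wait for Cilium resources" task above polls the Cilium workloads until their rollouts report success, retrying (as logged) when a resource is not yet ready. A rough manual equivalent, assuming the kubeconfig prepared earlier in this run and the default kube-system namespace used by the Cilium install, would be:

  # Resource names taken from the items logged for this task; namespace is an assumption.
  kubectl -n kube-system rollout status deployment/cilium-operator --timeout=300s
  kubectl -n kube-system rollout status daemonset/cilium --timeout=300s
  kubectl -n kube-system rollout status deployment/hubble-relay --timeout=300s
  kubectl -n kube-system rollout status deployment/hubble-ui --timeout=300s

The ~42 s this task takes in the recap below is dominated by the retry visible just above, while the remaining resources were already rolled out.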
2025-02-10 09:23:56.117351 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-02-10 09:23:56.117360 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-02-10 09:23:56.117369 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-02-10 09:23:56.117377 | orchestrator | 2025-02-10 09:23:56.117386 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-02-10 09:23:56.117394 | orchestrator | Monday 10 February 2025 09:23:23 +0000 (0:00:41.836) 0:03:59.976 ******* 2025-02-10 09:23:56.117403 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:23:56.117412 | orchestrator | 2025-02-10 09:23:56.117420 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-02-10 09:23:56.117429 | orchestrator | Monday 10 February 2025 09:23:25 +0000 (0:00:01.697) 0:04:01.673 ******* 2025-02-10 09:23:56.117437 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:23:56.117446 | orchestrator | 2025-02-10 09:23:56.117454 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-02-10 09:23:56.117487 | orchestrator | Monday 10 February 2025 09:23:26 +0000 (0:00:01.202) 0:04:02.875 ******* 2025-02-10 09:23:56.117496 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:23:56.117509 | orchestrator | 2025-02-10 09:23:56.117518 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-02-10 09:23:56.117527 | orchestrator | Monday 10 February 2025 09:23:27 +0000 (0:00:01.091) 0:04:03.967 ******* 2025-02-10 09:23:56.117536 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.117544 | orchestrator | 2025-02-10 09:23:56.117553 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-02-10 09:23:56.117561 | orchestrator | Monday 10 February 2025 09:23:28 +0000 (0:00:01.075) 0:04:05.043 ******* 2025-02-10 09:23:56.117570 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-02-10 09:23:56.117578 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-02-10 09:23:56.117587 | orchestrator | 2025-02-10 09:23:56.117595 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-02-10 09:23:56.117604 | orchestrator | Monday 10 February 2025 09:23:31 +0000 (0:00:02.678) 0:04:07.722 ******* 2025-02-10 09:23:56.117612 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.117621 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.117629 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.117638 | orchestrator | 2025-02-10 09:23:56.117646 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-02-10 09:23:56.117655 | orchestrator | Monday 10 February 2025 09:23:31 +0000 (0:00:00.574) 0:04:08.296 ******* 2025-02-10 09:23:56.117663 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.117672 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.117684 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.117694 | orchestrator | 2025-02-10 09:23:56.117702 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-02-10 09:23:56.117728 | orchestrator | 2025-02-10 
09:23:56.117736 | orchestrator | TASK [osism.commons.k9s : Gather variables for each operating system] ********** 2025-02-10 09:23:56.117749 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:01.774) 0:04:10.071 ******* 2025-02-10 09:23:56.117758 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:56.117766 | orchestrator | 2025-02-10 09:23:56.117775 | orchestrator | TASK [osism.commons.k9s : Include distribution specific install tasks] ********* 2025-02-10 09:23:56.117783 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:00.434) 0:04:10.505 ******* 2025-02-10 09:23:56.117792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 09:23:56.117801 | orchestrator | 2025-02-10 09:23:56.117809 | orchestrator | TASK [osism.commons.k9s : Install k9s packages] ******************************** 2025-02-10 09:23:56.117818 | orchestrator | Monday 10 February 2025 09:23:34 +0000 (0:00:00.304) 0:04:10.810 ******* 2025-02-10 09:23:56.117826 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:56.117835 | orchestrator | 2025-02-10 09:23:56.117843 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-02-10 09:23:56.117852 | orchestrator | 2025-02-10 09:23:56.117860 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-02-10 09:23:56.117869 | orchestrator | Monday 10 February 2025 09:23:40 +0000 (0:00:06.327) 0:04:17.137 ******* 2025-02-10 09:23:56.117877 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:23:56.117886 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:23:56.117924 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:23:56.117933 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:56.117942 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:56.117950 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:56.117958 | orchestrator | 2025-02-10 09:23:56.117967 | orchestrator | TASK [Manage labels] *********************************************************** 2025-02-10 09:23:56.117976 | orchestrator | Monday 10 February 2025 09:23:41 +0000 (0:00:00.840) 0:04:17.978 ******* 2025-02-10 09:23:56.117984 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-10 09:23:56.117998 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-10 09:23:56.118006 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-10 09:23:56.118277 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-10 09:23:56.118296 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-10 09:23:56.118305 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-10 09:23:56.118313 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-10 09:23:56.118322 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-10 09:23:56.118330 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-10 09:23:56.118339 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-10 09:23:56.118347 | 
orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-10 09:23:56.118356 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-10 09:23:56.118364 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-10 09:23:56.118372 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-10 09:23:56.118381 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-10 09:23:56.118389 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-10 09:23:56.118398 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-10 09:23:56.118406 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-10 09:23:56.118414 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-10 09:23:56.118423 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-10 09:23:56.118431 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-10 09:23:56.118439 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-10 09:23:56.118453 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-10 09:23:56.118462 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-10 09:23:56.118471 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-10 09:23:56.118479 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-10 09:23:56.118487 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-02-10 09:23:56.118496 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-10 09:23:56.118504 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-02-10 09:23:56.118521 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-02-10 09:23:56.118530 | orchestrator | 2025-02-10 09:23:56.118538 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-02-10 09:23:56.118547 | orchestrator | Monday 10 February 2025 09:23:53 +0000 (0:00:12.024) 0:04:30.003 ******* 2025-02-10 09:23:56.118555 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.118564 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:23:56.118572 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.118580 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.118589 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.118597 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.118613 | orchestrator | 2025-02-10 09:23:56.118621 | orchestrator | TASK [Manage taints] *********************************************************** 2025-02-10 09:23:56.118630 | orchestrator | Monday 10 February 2025 09:23:54 +0000 (0:00:00.633) 0:04:30.636 ******* 2025-02-10 09:23:56.118638 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:23:56.118646 | orchestrator | 
skipping: [testbed-node-4] 2025-02-10 09:23:56.118655 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:23:56.118663 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:56.118671 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:56.118679 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:56.118687 | orchestrator | 2025-02-10 09:23:56.118696 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:23:56.118751 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:23:56.118768 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-02-10 09:23:56.118780 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-02-10 09:23:56.118816 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-02-10 09:23:56.118825 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-10 09:23:56.118834 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-10 09:23:56.118843 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-10 09:23:56.118851 | orchestrator | 2025-02-10 09:23:56.118859 | orchestrator | Monday 10 February 2025 09:23:54 +0000 (0:00:00.707) 0:04:31.343 ******* 2025-02-10 09:23:56.118868 | orchestrator | =============================================================================== 2025-02-10 09:23:56.118876 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.22s 2025-02-10 09:23:56.118885 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.84s 2025-02-10 09:23:56.118894 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 13.83s 2025-02-10 09:23:56.118902 | orchestrator | osism.commons.kubectl : Install required packages ---------------------- 13.72s 2025-02-10 09:23:56.118911 | orchestrator | Manage labels ---------------------------------------------------------- 12.02s 2025-02-10 09:23:56.118920 | orchestrator | k3s_server_post : Install Cilium --------------------------------------- 10.10s 2025-02-10 09:23:56.118928 | orchestrator | osism.commons.kubectl : Add repository Debian --------------------------- 8.70s 2025-02-10 09:23:56.118936 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.58s 2025-02-10 09:23:56.118945 | orchestrator | osism.commons.k9s : Install k9s packages -------------------------------- 6.33s 2025-02-10 09:23:56.118953 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 5.74s 2025-02-10 09:23:56.118961 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.53s 2025-02-10 09:23:56.118970 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.10s 2025-02-10 09:23:56.118978 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.68s 2025-02-10 09:23:56.118987 | orchestrator | k3s_prereq : Set same timezone on every Server -------------------------- 2.61s 2025-02-10 09:23:56.118995 | 
orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.56s 2025-02-10 09:23:56.119013 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.49s 2025-02-10 09:23:56.119022 | orchestrator | osism.commons.kubectl : Install apt-transport-https package ------------- 2.37s 2025-02-10 09:23:56.119030 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 2.27s 2025-02-10 09:23:56.119038 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.92s 2025-02-10 09:23:56.119052 | orchestrator | k3s_server_post : Remove tmp directory used for manifests --------------- 1.77s 2025-02-10 09:23:56.119060 | orchestrator | 2025-02-10 09:23:56 | INFO  | Task 0ab1a19b-6d24-4f06-9a59-da7e131c67e2 is in state SUCCESS 2025-02-10 09:23:56.119073 | orchestrator | 2025-02-10 09:23:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:59.150308 | orchestrator | 2025-02-10 09:23:59 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:23:59.151360 | orchestrator | 2025-02-10 09:23:59 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:23:59.155950 | orchestrator | 2025-02-10 09:23:59 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:23:59.158938 | orchestrator | 2025-02-10 09:23:59 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:23:59.159778 | orchestrator | 2025-02-10 09:23:59 | INFO  | Task 5f58805b-7c3b-4007-8e07-0881196f8e1e is in state STARTED 2025-02-10 09:23:59.161839 | orchestrator | 2025-02-10 09:23:59 | INFO  | Task 56eccde7-b5d8-4d06-a02c-ee3f992ef6c7 is in state STARTED 2025-02-10 09:24:02.215799 | orchestrator | 2025-02-10 09:23:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:02.215953 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:02.221666 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:24:02.227630 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:02.230535 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:02.230572 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task 643b3ff1-80f1-40f9-a554-6bb4f08c72e0 is in state STARTED 2025-02-10 09:24:02.230585 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task 5f58805b-7c3b-4007-8e07-0881196f8e1e is in state STARTED 2025-02-10 09:24:02.230605 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task 56eccde7-b5d8-4d06-a02c-ee3f992ef6c7 is in state STARTED 2025-02-10 09:24:05.283266 | orchestrator | 2025-02-10 09:24:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:05.283474 | orchestrator | 2025-02-10 09:24:05 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:05.287515 | orchestrator | 2025-02-10 09:24:05 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:24:05.287630 | orchestrator | 2025-02-10 09:24:05 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:05.292250 | orchestrator | 2025-02-10 09:24:05 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:05.292889 | orchestrator | 2025-02-10 
09:24:05 | INFO  | Task 643b3ff1-80f1-40f9-a554-6bb4f08c72e0 is in state STARTED 2025-02-10 09:24:05.295384 | orchestrator | 2025-02-10 09:24:05 | INFO  | Task 5f58805b-7c3b-4007-8e07-0881196f8e1e is in state STARTED 2025-02-10 09:24:05.297893 | orchestrator | 2025-02-10 09:24:05 | INFO  | Task 56eccde7-b5d8-4d06-a02c-ee3f992ef6c7 is in state STARTED 2025-02-10 09:24:08.345978 | orchestrator | 2025-02-10 09:24:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:08.346156 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:08.346874 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:24:08.348927 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:08.351501 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:08.353687 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task 643b3ff1-80f1-40f9-a554-6bb4f08c72e0 is in state STARTED 2025-02-10 09:24:08.355680 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task 5f58805b-7c3b-4007-8e07-0881196f8e1e is in state SUCCESS 2025-02-10 09:24:08.357158 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task 56eccde7-b5d8-4d06-a02c-ee3f992ef6c7 is in state STARTED 2025-02-10 09:24:11.410593 | orchestrator | 2025-02-10 09:24:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:11.410812 | orchestrator | 2025-02-10 09:24:11 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:11.413161 | orchestrator | 2025-02-10 09:24:11 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:24:11.417274 | orchestrator | 2025-02-10 09:24:11 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:11.417378 | orchestrator | 2025-02-10 09:24:11 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:11.421645 | orchestrator | 2025-02-10 09:24:11 | INFO  | Task 643b3ff1-80f1-40f9-a554-6bb4f08c72e0 is in state STARTED 2025-02-10 09:24:11.422268 | orchestrator | 2025-02-10 09:24:11 | INFO  | Task 56eccde7-b5d8-4d06-a02c-ee3f992ef6c7 is in state SUCCESS 2025-02-10 09:24:11.422375 | orchestrator | 2025-02-10 09:24:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:14.471285 | orchestrator | 2025-02-10 09:24:14 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:14.471683 | orchestrator | 2025-02-10 09:24:14 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:24:14.471768 | orchestrator | 2025-02-10 09:24:14 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:14.474390 | orchestrator | 2025-02-10 09:24:14 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:17.524245 | orchestrator | 2025-02-10 09:24:14 | INFO  | Task 643b3ff1-80f1-40f9-a554-6bb4f08c72e0 is in state STARTED 2025-02-10 09:24:17.524390 | orchestrator | 2025-02-10 09:24:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:17.524433 | orchestrator | 2025-02-10 09:24:17 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:17.525314 | orchestrator | 2025-02-10 09:24:17 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state STARTED 2025-02-10 09:24:17.525454 | 
orchestrator | 2025-02-10 09:24:17 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:17.525500 | orchestrator | 2025-02-10 09:24:17 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:20.570013 | orchestrator | 2025-02-10 09:24:17 | INFO  | Task 643b3ff1-80f1-40f9-a554-6bb4f08c72e0 is in state SUCCESS 2025-02-10 09:24:20.570289 | orchestrator | 2025-02-10 09:24:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:20.570334 | orchestrator | 2025-02-10 09:24:20 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:20.570381 | orchestrator | 2025-02-10 09:24:20 | INFO  | Task cf661d87-95a4-4b94-8940-915842fea4ae is in state SUCCESS 2025-02-10 09:24:20.570397 | orchestrator | 2025-02-10 09:24:20.570412 | orchestrator | 2025-02-10 09:24:20.570427 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-02-10 09:24:20.570441 | orchestrator | 2025-02-10 09:24:20.570456 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-10 09:24:20.570470 | orchestrator | Monday 10 February 2025 09:24:01 +0000 (0:00:00.368) 0:00:00.368 ******* 2025-02-10 09:24:20.570485 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-10 09:24:20.570500 | orchestrator | 2025-02-10 09:24:20.570514 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-10 09:24:20.570528 | orchestrator | Monday 10 February 2025 09:24:03 +0000 (0:00:01.390) 0:00:01.758 ******* 2025-02-10 09:24:20.570543 | orchestrator | changed: [testbed-manager] 2025-02-10 09:24:20.570558 | orchestrator | 2025-02-10 09:24:20.570572 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-02-10 09:24:20.570586 | orchestrator | Monday 10 February 2025 09:24:04 +0000 (0:00:01.938) 0:00:03.697 ******* 2025-02-10 09:24:20.570601 | orchestrator | changed: [testbed-manager] 2025-02-10 09:24:20.570616 | orchestrator | 2025-02-10 09:24:20.570630 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:24:20.570644 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:24:20.570660 | orchestrator | 2025-02-10 09:24:20.570675 | orchestrator | Monday 10 February 2025 09:24:05 +0000 (0:00:00.564) 0:00:04.261 ******* 2025-02-10 09:24:20.570689 | orchestrator | =============================================================================== 2025-02-10 09:24:20.570702 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.94s 2025-02-10 09:24:20.570716 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.39s 2025-02-10 09:24:20.570767 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.56s 2025-02-10 09:24:20.570781 | orchestrator | 2025-02-10 09:24:20.570797 | orchestrator | 2025-02-10 09:24:20.570812 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-02-10 09:24:20.570827 | orchestrator | 2025-02-10 09:24:20.570842 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-02-10 09:24:20.570857 | orchestrator | Monday 10 February 2025 09:24:00 +0000 (0:00:00.434) 0:00:00.434 ******* 2025-02-10 
09:24:20.570873 | orchestrator | ok: [testbed-manager] 2025-02-10 09:24:20.570889 | orchestrator | 2025-02-10 09:24:20.570905 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-02-10 09:24:20.570920 | orchestrator | Monday 10 February 2025 09:24:02 +0000 (0:00:01.165) 0:00:01.600 ******* 2025-02-10 09:24:20.570936 | orchestrator | ok: [testbed-manager] 2025-02-10 09:24:20.570952 | orchestrator | 2025-02-10 09:24:20.570967 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-10 09:24:20.570981 | orchestrator | Monday 10 February 2025 09:24:03 +0000 (0:00:01.054) 0:00:02.654 ******* 2025-02-10 09:24:20.570995 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-10 09:24:20.571009 | orchestrator | 2025-02-10 09:24:20.571023 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-10 09:24:20.571036 | orchestrator | Monday 10 February 2025 09:24:04 +0000 (0:00:01.013) 0:00:03.668 ******* 2025-02-10 09:24:20.571050 | orchestrator | changed: [testbed-manager] 2025-02-10 09:24:20.571065 | orchestrator | 2025-02-10 09:24:20.571078 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-02-10 09:24:20.571092 | orchestrator | Monday 10 February 2025 09:24:06 +0000 (0:00:02.078) 0:00:05.746 ******* 2025-02-10 09:24:20.571106 | orchestrator | changed: [testbed-manager] 2025-02-10 09:24:20.571120 | orchestrator | 2025-02-10 09:24:20.571143 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-02-10 09:24:20.571157 | orchestrator | Monday 10 February 2025 09:24:06 +0000 (0:00:00.633) 0:00:06.380 ******* 2025-02-10 09:24:20.571171 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-10 09:24:20.571185 | orchestrator | 2025-02-10 09:24:20.571199 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-02-10 09:24:20.571212 | orchestrator | Monday 10 February 2025 09:24:08 +0000 (0:00:01.263) 0:00:07.643 ******* 2025-02-10 09:24:20.571226 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-10 09:24:20.571240 | orchestrator | 2025-02-10 09:24:20.571254 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-02-10 09:24:20.571268 | orchestrator | Monday 10 February 2025 09:24:08 +0000 (0:00:00.627) 0:00:08.271 ******* 2025-02-10 09:24:20.571281 | orchestrator | ok: [testbed-manager] 2025-02-10 09:24:20.571295 | orchestrator | 2025-02-10 09:24:20.571327 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-02-10 09:24:20.571342 | orchestrator | Monday 10 February 2025 09:24:09 +0000 (0:00:00.554) 0:00:08.825 ******* 2025-02-10 09:24:20.571355 | orchestrator | ok: [testbed-manager] 2025-02-10 09:24:20.571369 | orchestrator | 2025-02-10 09:24:20.571383 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:24:20.571397 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:24:20.571411 | orchestrator | 2025-02-10 09:24:20.571425 | orchestrator | Monday 10 February 2025 09:24:09 +0000 (0:00:00.399) 0:00:09.224 ******* 2025-02-10 09:24:20.571439 | orchestrator | =============================================================================== 2025-02-10 
09:24:20.571453 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.08s 2025-02-10 09:24:20.571467 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.26s 2025-02-10 09:24:20.571492 | orchestrator | Get home directory of operator user ------------------------------------- 1.17s 2025-02-10 09:24:20.572479 | orchestrator | Create .kube directory -------------------------------------------------- 1.05s 2025-02-10 09:24:20.572517 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.01s 2025-02-10 09:24:20.572530 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.63s 2025-02-10 09:24:20.572542 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.63s 2025-02-10 09:24:20.572555 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.55s 2025-02-10 09:24:20.572567 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.40s 2025-02-10 09:24:20.572580 | orchestrator | 2025-02-10 09:24:20.572592 | orchestrator | None 2025-02-10 09:24:20.572612 | orchestrator | 2025-02-10 09:24:20.572626 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-02-10 09:24:20.572638 | orchestrator | 2025-02-10 09:24:20.572651 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-02-10 09:24:20.572663 | orchestrator | Monday 10 February 2025 09:21:32 +0000 (0:00:00.130) 0:00:00.130 ******* 2025-02-10 09:24:20.572676 | orchestrator | ok: [localhost] => { 2025-02-10 09:24:20.572689 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-02-10 09:24:20.572702 | orchestrator | } 2025-02-10 09:24:20.572715 | orchestrator | 2025-02-10 09:24:20.572800 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-02-10 09:24:20.572814 | orchestrator | Monday 10 February 2025 09:21:32 +0000 (0:00:00.034) 0:00:00.165 ******* 2025-02-10 09:24:20.572827 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-02-10 09:24:20.572841 | orchestrator | ...ignoring 2025-02-10 09:24:20.572854 | orchestrator | 2025-02-10 09:24:20.572866 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-02-10 09:24:20.572892 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:03.241) 0:00:03.407 ******* 2025-02-10 09:24:20.572905 | orchestrator | skipping: [localhost] 2025-02-10 09:24:20.572917 | orchestrator | 2025-02-10 09:24:20.572930 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-02-10 09:24:20.572942 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:00.096) 0:00:03.503 ******* 2025-02-10 09:24:20.572954 | orchestrator | ok: [localhost] 2025-02-10 09:24:20.572966 | orchestrator | 2025-02-10 09:24:20.572979 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:24:20.572991 | orchestrator | 2025-02-10 09:24:20.573003 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:24:20.573015 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:00.420) 0:00:03.923 ******* 2025-02-10 09:24:20.573027 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:24:20.573039 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:24:20.573052 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:24:20.573064 | orchestrator | 2025-02-10 09:24:20.573077 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:24:20.573092 | orchestrator | Monday 10 February 2025 09:21:36 +0000 (0:00:00.584) 0:00:04.508 ******* 2025-02-10 09:24:20.573106 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-02-10 09:24:20.573120 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-02-10 09:24:20.573134 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-02-10 09:24:20.573147 | orchestrator | 2025-02-10 09:24:20.573161 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-02-10 09:24:20.573175 | orchestrator | 2025-02-10 09:24:20.573189 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-10 09:24:20.573203 | orchestrator | Monday 10 February 2025 09:21:37 +0000 (0:00:00.751) 0:00:05.260 ******* 2025-02-10 09:24:20.573217 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:24:20.573231 | orchestrator | 2025-02-10 09:24:20.573246 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-02-10 09:24:20.573268 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:02.615) 0:00:07.876 ******* 2025-02-10 09:24:20.573282 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:24:20.573296 | orchestrator | 2025-02-10 09:24:20.573310 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-02-10 09:24:20.573324 | orchestrator | Monday 10 February 2025 09:21:42 +0000 (0:00:02.946) 0:00:10.823 ******* 2025-02-10 09:24:20.573338 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:24:20.573352 | orchestrator | 2025-02-10 09:24:20.573366 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-02-10 09:24:20.573380 | orchestrator | Monday 10 February 2025 09:21:43 +0000 (0:00:01.032) 0:00:11.855 ******* 2025-02-10 09:24:20.573393 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:24:20.573408 | orchestrator | 2025-02-10 09:24:20.573422 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-02-10 09:24:20.573436 | orchestrator | Monday 10 February 2025 09:21:44 +0000 (0:00:00.849) 0:00:12.704 ******* 2025-02-10 09:24:20.573448 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:24:20.573465 | orchestrator | 2025-02-10 09:24:20.573478 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-02-10 09:24:20.573490 | orchestrator | Monday 10 February 2025 09:21:45 +0000 (0:00:00.405) 0:00:13.110 ******* 2025-02-10 09:24:20.573502 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:24:20.573515 | orchestrator | 2025-02-10 09:24:20.573527 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-10 09:24:20.573539 | orchestrator | Monday 10 February 2025 09:21:45 +0000 (0:00:00.641) 0:00:13.751 ******* 2025-02-10 09:24:20.573551 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:24:20.573570 | orchestrator | 2025-02-10 09:24:20.573582 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-02-10 09:24:20.573595 | orchestrator | Monday 10 February 2025 09:21:47 +0000 (0:00:01.646) 0:00:15.398 ******* 2025-02-10 09:24:20.573607 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:24:20.573619 | orchestrator | 2025-02-10 09:24:20.573632 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-02-10 09:24:20.573644 | orchestrator | Monday 10 February 2025 09:21:48 +0000 (0:00:01.218) 0:00:16.616 ******* 2025-02-10 09:24:20.573656 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:24:20.573669 | orchestrator | 2025-02-10 09:24:20.573682 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-02-10 09:24:20.573694 | orchestrator | Monday 10 February 2025 09:21:49 +0000 (0:00:00.506) 0:00:17.122 ******* 2025-02-10 09:24:20.573706 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:24:20.573740 | orchestrator | 2025-02-10 09:24:20.573764 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-02-10 09:24:20.573777 | orchestrator | Monday 10 February 2025 09:21:50 +0000 (0:00:01.169) 0:00:18.292 ******* 2025-02-10 09:24:20.573794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.573812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.573826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.573846 | orchestrator | 2025-02-10 09:24:20.573859 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-02-10 09:24:20.573871 | orchestrator | Monday 10 February 2025 09:21:52 +0000 (0:00:02.232) 0:00:20.524 ******* 2025-02-10 09:24:20.573892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.573906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.573920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.573933 | orchestrator | 2025-02-10 09:24:20.573952 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-02-10 09:24:20.573964 | orchestrator | Monday 10 February 2025 09:21:56 +0000 (0:00:04.359) 0:00:24.884 ******* 2025-02-10 09:24:20.573976 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-10 09:24:20.573988 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-10 09:24:20.574001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-10 09:24:20.574013 | orchestrator | 2025-02-10 09:24:20.574070 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-02-10 09:24:20.574089 | orchestrator | Monday 10 February 
2025 09:22:01 +0000 (0:00:04.139) 0:00:29.023 ******* 2025-02-10 09:24:20.574101 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-10 09:24:20.574114 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-10 09:24:20.574126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-10 09:24:20.574138 | orchestrator | 2025-02-10 09:24:20.574150 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-02-10 09:24:20.574162 | orchestrator | Monday 10 February 2025 09:22:05 +0000 (0:00:04.088) 0:00:33.112 ******* 2025-02-10 09:24:20.574174 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-10 09:24:20.574187 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-10 09:24:20.574199 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-10 09:24:20.574211 | orchestrator | 2025-02-10 09:24:20.574229 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-02-10 09:24:20.574242 | orchestrator | Monday 10 February 2025 09:22:08 +0000 (0:00:03.793) 0:00:36.905 ******* 2025-02-10 09:24:20.574254 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-10 09:24:20.574266 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-10 09:24:20.574279 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-10 09:24:20.574291 | orchestrator | 2025-02-10 09:24:20.574307 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-02-10 09:24:20.574320 | orchestrator | Monday 10 February 2025 09:22:13 +0000 (0:00:04.164) 0:00:41.070 ******* 2025-02-10 09:24:20.574332 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-10 09:24:20.574344 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-10 09:24:20.574357 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-10 09:24:20.574369 | orchestrator | 2025-02-10 09:24:20.574381 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-02-10 09:24:20.574393 | orchestrator | Monday 10 February 2025 09:22:16 +0000 (0:00:02.940) 0:00:44.010 ******* 2025-02-10 09:24:20.574406 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-10 09:24:20.574418 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-10 09:24:20.574430 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-10 09:24:20.574442 | orchestrator | 2025-02-10 09:24:20.574454 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-10 09:24:20.574467 | orchestrator | Monday 10 February 2025 09:22:19 +0000 (0:00:03.413) 0:00:47.424 ******* 2025-02-10 09:24:20.574479 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
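The loop items echoed by the rabbitmq config tasks above all repeat the same service definition: the release image, the COPY_ALWAYS config strategy plus cluster cookie and log directory in the environment, the bind mounts, a healthcheck wrapping the healthcheck_rabbitmq script, and an haproxy frontend for the management UI on port 15672. Condensed to YAML purely for readability — a sketch of the structure shown in the log, with the top-level variable name assumed from kolla-ansible convention and the cookie value deliberately omitted:

    rabbitmq_services:          # top-level variable name assumed, not verified against the role
      rabbitmq:
        container_name: rabbitmq
        group: rabbitmq
        enabled: true
        image: "nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206"
        environment:
          KOLLA_CONFIG_STRATEGY: "COPY_ALWAYS"
          RABBITMQ_LOG_DIR: "/var/log/kolla/rabbitmq"
          # RABBITMQ_CLUSTER_COOKIE: <shared secret, value omitted here>
          # bootstrap_environment additionally sets KOLLA_BOOTSTRAP for the one-shot bootstrap run
        volumes:
          - "/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "rabbitmq:/var/lib/rabbitmq/"
          - "kolla_logs:/var/log/kolla/"
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_rabbitmq"]
          timeout: "30"
        haproxy:
          rabbitmq_management:
            enabled: "yes"
            mode: "http"
            port: "15672"
            host_group: "rabbitmq"

The same dict is passed to every task in the loop (config dirs, config.json, rabbitmq-env.conf, and so on), which is why it is printed repeatedly in the output above.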
09:24:20.574498 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:24:20.574510 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:24:20.574522 | orchestrator | 2025-02-10 09:24:20.574534 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-02-10 09:24:20.574546 | orchestrator | Monday 10 February 2025 09:22:20 +0000 (0:00:01.037) 0:00:48.462 ******* 2025-02-10 09:24:20.574559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.574573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.574594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:24:20.574608 | orchestrator | 2025-02-10 09:24:20.574620 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-02-10 09:24:20.574632 | orchestrator | Monday 10 February 2025 09:22:22 +0000 (0:00:02.173) 0:00:50.635 ******* 2025-02-10 09:24:20.574644 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:24:20.574656 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:24:20.574675 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:24:20.574687 | orchestrator | 2025-02-10 09:24:20.574700 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-02-10 09:24:20.574712 | orchestrator | Monday 10 February 2025 09:22:24 +0000 (0:00:01.368) 0:00:52.008 ******* 2025-02-10 09:24:20.574745 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:24:20.574759 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:24:20.574771 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:24:20.574784 | orchestrator | 2025-02-10 09:24:20.574796 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-02-10 09:24:20.574808 | orchestrator | Monday 10 February 2025 09:22:30 +0000 (0:00:06.862) 0:00:58.870 ******* 2025-02-10 09:24:20.574820 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:24:20.574833 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:24:20.574845 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:24:20.574857 | orchestrator | 2025-02-10 09:24:20.574869 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-10 09:24:20.574882 | orchestrator | 2025-02-10 09:24:20.574894 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-10 09:24:20.574906 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:00.387) 0:00:59.258 ******* 2025-02-10 09:24:20.574918 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:24:20.574930 | orchestrator | 2025-02-10 09:24:20.574943 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-10 09:24:20.574955 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:00.666) 0:00:59.925 ******* 2025-02-10 09:24:20.574967 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:24:20.574979 | orchestrator | 2025-02-10 09:24:20.574992 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-10 09:24:20.575004 | orchestrator | Monday 10 February 2025 09:22:32 +0000 (0:00:00.585) 0:01:00.510 ******* 2025-02-10 09:24:20.575016 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:24:20.575028 | orchestrator | 2025-02-10 09:24:20.575040 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-10 09:24:20.575052 | orchestrator | Monday 10 February 2025 09:22:34 +0000 (0:00:02.379) 0:01:02.890 ******* 2025-02-10 09:24:20.575064 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:24:20.575077 | orchestrator | 2025-02-10 09:24:20.575089 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-10 09:24:20.575101 | orchestrator | 2025-02-10 09:24:20.575113 | orchestrator | TASK [rabbitmq : 
Get info on RabbitMQ container] ******************************* 2025-02-10 09:24:20.575125 | orchestrator | Monday 10 February 2025 09:23:30 +0000 (0:00:55.888) 0:01:58.778 ******* 2025-02-10 09:24:20.575137 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:24:20.575149 | orchestrator | 2025-02-10 09:24:20.575162 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-10 09:24:20.575174 | orchestrator | Monday 10 February 2025 09:23:31 +0000 (0:00:01.115) 0:01:59.894 ******* 2025-02-10 09:24:20.575186 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:24:20.575198 | orchestrator | 2025-02-10 09:24:20.575210 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-10 09:24:20.575222 | orchestrator | Monday 10 February 2025 09:23:32 +0000 (0:00:00.520) 0:02:00.415 ******* 2025-02-10 09:24:20.575235 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:24:20.575247 | orchestrator | 2025-02-10 09:24:20.575259 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-10 09:24:20.575271 | orchestrator | Monday 10 February 2025 09:23:35 +0000 (0:00:02.874) 0:02:03.289 ******* 2025-02-10 09:24:20.575284 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:24:20.575296 | orchestrator | 2025-02-10 09:24:20.575313 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-10 09:24:20.575326 | orchestrator | 2025-02-10 09:24:20.575338 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-10 09:24:20.575350 | orchestrator | Monday 10 February 2025 09:23:51 +0000 (0:00:16.237) 0:02:19.527 ******* 2025-02-10 09:24:20.575376 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:24:20.575388 | orchestrator | 2025-02-10 09:24:20.575400 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-10 09:24:20.575413 | orchestrator | Monday 10 February 2025 09:23:52 +0000 (0:00:00.815) 0:02:20.342 ******* 2025-02-10 09:24:20.575425 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:24:20.575437 | orchestrator | 2025-02-10 09:24:20.575455 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-10 09:24:20.575468 | orchestrator | Monday 10 February 2025 09:23:53 +0000 (0:00:00.730) 0:02:21.073 ******* 2025-02-10 09:24:20.575480 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:24:20.575492 | orchestrator | 2025-02-10 09:24:20.575505 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-10 09:24:20.575517 | orchestrator | Monday 10 February 2025 09:24:00 +0000 (0:00:07.602) 0:02:28.676 ******* 2025-02-10 09:24:20.575529 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:24:20.575541 | orchestrator | 2025-02-10 09:24:20.575553 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-02-10 09:24:20.575565 | orchestrator | 2025-02-10 09:24:20.575577 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-02-10 09:24:20.575590 | orchestrator | Monday 10 February 2025 09:24:13 +0000 (0:00:12.504) 0:02:41.180 ******* 2025-02-10 09:24:20.575602 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:24:20.575621 | orchestrator | 2025-02-10 09:24:20.575640 | 
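With the three controllers restarted one at a time (each "Restart rabbitmq container" is followed by a "Waiting for rabbitmq to start" check before the play moves on to the next node), the post-deploy play below enables all stable RabbitMQ feature flags. The [WARNING] lines about enable_outward_rabbitmq_True and outward_rabbitmq_restart that follow only mean the optional outward RabbitMQ instance is not enabled in this testbed, so those dynamic groups are empty and the related plays report "no hosts matched". A minimal sketch of a task with the same effect as the feature-flag step — illustrative only; the actual kolla-ansible task may use a different module or wrapper:

    - name: Enable all stable feature flags
      ansible.builtin.command:
        cmd: docker exec rabbitmq rabbitmqctl enable_feature_flag all
      changed_when: false    # the log reports this step as "ok" on all three nodes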
orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-02-10 09:24:20.575661 | orchestrator | Monday 10 February 2025 09:24:14 +0000 (0:00:01.753) 0:02:42.934 ******* 2025-02-10 09:24:20.575682 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-10 09:24:20.575695 | orchestrator | enable_outward_rabbitmq_True 2025-02-10 09:24:20.575708 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-10 09:24:20.575790 | orchestrator | outward_rabbitmq_restart 2025-02-10 09:24:20.575806 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:24:20.575819 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:24:20.575830 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:24:20.575843 | orchestrator | 2025-02-10 09:24:20.575855 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-02-10 09:24:20.575867 | orchestrator | skipping: no hosts matched 2025-02-10 09:24:20.575879 | orchestrator | 2025-02-10 09:24:20.575892 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-02-10 09:24:20.575904 | orchestrator | skipping: no hosts matched 2025-02-10 09:24:20.575916 | orchestrator | 2025-02-10 09:24:20.575928 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-02-10 09:24:20.575940 | orchestrator | skipping: no hosts matched 2025-02-10 09:24:20.575952 | orchestrator | 2025-02-10 09:24:20.575965 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:24:20.575977 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-02-10 09:24:20.575990 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-02-10 09:24:20.576003 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:24:20.576016 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:24:20.576028 | orchestrator | 2025-02-10 09:24:20.576040 | orchestrator | 2025-02-10 09:24:20.576052 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:24:20.576064 | orchestrator | Monday 10 February 2025 09:24:18 +0000 (0:00:03.555) 0:02:46.489 ******* 2025-02-10 09:24:20.576085 | orchestrator | =============================================================================== 2025-02-10 09:24:20.576097 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.63s 2025-02-10 09:24:20.576109 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 12.86s 2025-02-10 09:24:20.576121 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.86s 2025-02-10 09:24:20.576133 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.36s 2025-02-10 09:24:20.576145 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 4.16s 2025-02-10 09:24:20.576158 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 4.13s 2025-02-10 09:24:20.576170 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.09s 2025-02-10 09:24:20.576182 | orchestrator | rabbitmq : 
Copying over erl_inetrc -------------------------------------- 3.79s 2025-02-10 09:24:20.576194 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.56s 2025-02-10 09:24:20.576212 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 3.41s 2025-02-10 09:24:20.576224 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.24s 2025-02-10 09:24:20.576237 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.95s 2025-02-10 09:24:20.576249 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.94s 2025-02-10 09:24:20.576261 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.62s 2025-02-10 09:24:20.576272 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.60s 2025-02-10 09:24:20.576282 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.23s 2025-02-10 09:24:20.576292 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.17s 2025-02-10 09:24:20.576302 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.84s 2025-02-10 09:24:20.576312 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.75s 2025-02-10 09:24:20.576322 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.65s 2025-02-10 09:24:20.576337 | orchestrator | 2025-02-10 09:24:20 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:23.618122 | orchestrator | 2025-02-10 09:24:20 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:23.619214 | orchestrator | 2025-02-10 09:24:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:23.619461 | orchestrator | 2025-02-10 09:24:23 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:23.622580 | orchestrator | 2025-02-10 09:24:23 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:23.622757 | orchestrator | 2025-02-10 09:24:23 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:26.681505 | orchestrator | 2025-02-10 09:24:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:26.681675 | orchestrator | 2025-02-10 09:24:26 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:26.683999 | orchestrator | 2025-02-10 09:24:26 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:26.684950 | orchestrator | 2025-02-10 09:24:26 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:26.685081 | orchestrator | 2025-02-10 09:24:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:29.734202 | orchestrator | 2025-02-10 09:24:29 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:32.792269 | orchestrator | 2025-02-10 09:24:29 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:32.792437 | orchestrator | 2025-02-10 09:24:29 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:32.792459 | orchestrator | 2025-02-10 09:24:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:32.792497 | orchestrator | 
2025-02-10 09:24:32 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:32.793010 | orchestrator | 2025-02-10 09:24:32 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:32.793045 | orchestrator | 2025-02-10 09:24:32 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:32.793153 | orchestrator | 2025-02-10 09:24:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:35.842416 | orchestrator | 2025-02-10 09:24:35 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:35.844618 | orchestrator | 2025-02-10 09:24:35 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:35.846239 | orchestrator | 2025-02-10 09:24:35 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:38.905893 | orchestrator | 2025-02-10 09:24:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:38.906165 | orchestrator | 2025-02-10 09:24:38 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:38.909864 | orchestrator | 2025-02-10 09:24:38 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:41.980131 | orchestrator | 2025-02-10 09:24:38 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:41.980249 | orchestrator | 2025-02-10 09:24:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:41.980279 | orchestrator | 2025-02-10 09:24:41 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:41.981242 | orchestrator | 2025-02-10 09:24:41 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:41.982335 | orchestrator | 2025-02-10 09:24:41 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:45.044292 | orchestrator | 2025-02-10 09:24:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:45.044483 | orchestrator | 2025-02-10 09:24:45 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:45.045112 | orchestrator | 2025-02-10 09:24:45 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:45.046415 | orchestrator | 2025-02-10 09:24:45 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:48.099246 | orchestrator | 2025-02-10 09:24:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:48.099436 | orchestrator | 2025-02-10 09:24:48 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:48.102269 | orchestrator | 2025-02-10 09:24:48 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:48.102311 | orchestrator | 2025-02-10 09:24:48 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:51.147410 | orchestrator | 2025-02-10 09:24:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:51.147566 | orchestrator | 2025-02-10 09:24:51 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:54.197876 | orchestrator | 2025-02-10 09:24:51 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:54.198235 | orchestrator | 2025-02-10 09:24:51 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:54.198266 | orchestrator | 2025-02-10 09:24:51 | INFO 
 | Wait 1 second(s) until the next check 2025-02-10 09:24:54.198302 | orchestrator | 2025-02-10 09:24:54 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:24:54.198865 | orchestrator | 2025-02-10 09:24:54 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:24:54.198904 | orchestrator | 2025-02-10 09:24:54 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:24:57.268193 | orchestrator | 2025-02-10 09:24:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:57.268654 | orchestrator | 2025-02-10 09:24:57 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:00.321063 | orchestrator | 2025-02-10 09:24:57 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:00.321213 | orchestrator | 2025-02-10 09:24:57 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:00.321236 | orchestrator | 2025-02-10 09:24:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:00.321274 | orchestrator | 2025-02-10 09:25:00 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:00.323866 | orchestrator | 2025-02-10 09:25:00 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:00.326145 | orchestrator | 2025-02-10 09:25:00 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:00.326184 | orchestrator | 2025-02-10 09:25:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:03.400562 | orchestrator | 2025-02-10 09:25:03 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:03.403010 | orchestrator | 2025-02-10 09:25:03 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:03.403074 | orchestrator | 2025-02-10 09:25:03 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:06.469291 | orchestrator | 2025-02-10 09:25:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:06.469481 | orchestrator | 2025-02-10 09:25:06 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:06.469623 | orchestrator | 2025-02-10 09:25:06 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:06.472533 | orchestrator | 2025-02-10 09:25:06 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:09.522460 | orchestrator | 2025-02-10 09:25:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:09.522628 | orchestrator | 2025-02-10 09:25:09 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:12.567372 | orchestrator | 2025-02-10 09:25:09 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:12.567522 | orchestrator | 2025-02-10 09:25:09 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:12.567566 | orchestrator | 2025-02-10 09:25:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:12.567604 | orchestrator | 2025-02-10 09:25:12 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:12.568145 | orchestrator | 2025-02-10 09:25:12 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:12.573710 | orchestrator | 2025-02-10 09:25:12 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in 
state STARTED 2025-02-10 09:25:15.620880 | orchestrator | 2025-02-10 09:25:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:15.621151 | orchestrator | 2025-02-10 09:25:15 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:15.621188 | orchestrator | 2025-02-10 09:25:15 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:15.626114 | orchestrator | 2025-02-10 09:25:15 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:18.667077 | orchestrator | 2025-02-10 09:25:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:18.667247 | orchestrator | 2025-02-10 09:25:18 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:18.670651 | orchestrator | 2025-02-10 09:25:18 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:18.673562 | orchestrator | 2025-02-10 09:25:18 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:21.736218 | orchestrator | 2025-02-10 09:25:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:21.736352 | orchestrator | 2025-02-10 09:25:21 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:21.737103 | orchestrator | 2025-02-10 09:25:21 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:21.737133 | orchestrator | 2025-02-10 09:25:21 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:24.778668 | orchestrator | 2025-02-10 09:25:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:24.778923 | orchestrator | 2025-02-10 09:25:24 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:24.779575 | orchestrator | 2025-02-10 09:25:24 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:24.779611 | orchestrator | 2025-02-10 09:25:24 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:27.823962 | orchestrator | 2025-02-10 09:25:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:27.824047 | orchestrator | 2025-02-10 09:25:27 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:27.825132 | orchestrator | 2025-02-10 09:25:27 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:27.828310 | orchestrator | 2025-02-10 09:25:27 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state STARTED 2025-02-10 09:25:27.831545 | orchestrator | 2025-02-10 09:25:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:30.886446 | orchestrator | 2025-02-10 09:25:30 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:30.891319 | orchestrator | 2025-02-10 09:25:30 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:30.896924 | orchestrator | 2025-02-10 09:25:30.896986 | orchestrator | 2025-02-10 09:25:30.897003 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:25:30.897019 | orchestrator | 2025-02-10 09:25:30.897034 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:25:30.897049 | orchestrator | Monday 10 February 2025 09:22:39 +0000 (0:00:00.211) 0:00:00.211 ******* 2025-02-10 09:25:30.897063 | orchestrator | ok: 
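The "Group hosts based on ..." plays that open each kolla-ansible service playbook build dynamic inventory groups from configuration flags; the task below, for example, puts every host into enable_ovn_True because OVN is enabled testbed-wide. This is also why the earlier warnings about unmatched host patterns are harmless: a flag that is off simply produces an empty group. A minimal sketch of the pattern, with the flag variable name inferred from the group name in the log:

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_ovn_{{ enable_ovn | bool }}"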
[testbed-node-3] 2025-02-10 09:25:30.897526 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:25:30.897556 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:25:30.897572 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.897594 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.897618 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.897681 | orchestrator | 2025-02-10 09:25:30.897706 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:25:30.897729 | orchestrator | Monday 10 February 2025 09:22:40 +0000 (0:00:01.022) 0:00:01.233 ******* 2025-02-10 09:25:30.897743 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-02-10 09:25:30.897805 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-02-10 09:25:30.897834 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-02-10 09:25:30.897862 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-02-10 09:25:30.897887 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-02-10 09:25:30.897903 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-02-10 09:25:30.897917 | orchestrator | 2025-02-10 09:25:30.897931 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-02-10 09:25:30.897945 | orchestrator | 2025-02-10 09:25:30.897958 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-02-10 09:25:30.897972 | orchestrator | Monday 10 February 2025 09:22:43 +0000 (0:00:03.712) 0:00:04.946 ******* 2025-02-10 09:25:30.897987 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:25:30.898002 | orchestrator | 2025-02-10 09:25:30.898076 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-02-10 09:25:30.898092 | orchestrator | Monday 10 February 2025 09:22:46 +0000 (0:00:02.409) 0:00:07.355 ******* 2025-02-10 09:25:30.898113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-02-10 09:25:30.898170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898269 | orchestrator | 2025-02-10 09:25:30.898287 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-02-10 09:25:30.898302 | orchestrator | Monday 10 February 2025 09:22:48 +0000 (0:00:02.003) 0:00:09.359 ******* 2025-02-10 09:25:30.898318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898440 | orchestrator | 2025-02-10 09:25:30.898456 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-02-10 09:25:30.898472 | orchestrator | Monday 10 February 2025 09:22:50 +0000 (0:00:02.538) 0:00:11.898 ******* 2025-02-10 09:25:30.898488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898596 | orchestrator | 2025-02-10 09:25:30.898610 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-02-10 09:25:30.898624 | orchestrator | Monday 10 February 2025 09:22:52 +0000 (0:00:01.365) 0:00:13.264 ******* 2025-02-10 09:25:30.898638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 
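As with the rabbitmq items earlier, every ovn-controller loop item repeats one service definition. The details worth noting are the release image and the '/run/openvswitch:/run/openvswitch:shared' bind mount, which gives the containerized ovn-controller access to the Open vSwitch sockets, alongside the systemd override for the container unit handled by the task above. Condensed to YAML for readability, values taken verbatim from the log:

    ovn-controller:
      container_name: ovn_controller
      group: ovn-controller
      enabled: true
      image: "nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206"
      volumes:
        - "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro"
        - "/run/openvswitch:/run/openvswitch:shared"
        - "/etc/localtime:/etc/localtime:ro"
        - "kolla_logs:/var/log/kolla/"
      dimensions: {}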
09:25:30.898738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898774 | orchestrator | 2025-02-10 09:25:30.898789 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-02-10 09:25:30.898803 | orchestrator | Monday 10 February 2025 09:22:54 +0000 (0:00:02.001) 0:00:15.266 ******* 2025-02-10 09:25:30.898822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.898915 | orchestrator | 2025-02-10 09:25:30.898929 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-02-10 09:25:30.898943 | orchestrator | Monday 10 February 2025 09:22:55 +0000 (0:00:01.632) 0:00:16.898 ******* 2025-02-10 09:25:30.898957 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:25:30.898972 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:25:30.898986 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.899000 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:25:30.899014 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:25:30.899029 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:25:30.899050 | orchestrator | 2025-02-10 09:25:30.899065 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-02-10 09:25:30.899083 | orchestrator | Monday 10 February 2025 09:22:59 +0000 (0:00:03.279) 0:00:20.177 ******* 2025-02-10 09:25:30.899104 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-02-10 09:25:30.899119 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-02-10 09:25:30.899133 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-02-10 09:25:30.899147 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-02-10 09:25:30.899161 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-02-10 09:25:30.899175 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-02-10 09:25:30.899188 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:25:30.899202 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:25:30.899216 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:25:30.899230 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:25:30.899243 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:25:30.899257 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:25:30.899271 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:25:30.899287 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:25:30.899301 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:25:30.899315 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:25:30.899329 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:25:30.899343 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:25:30.899365 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:25:30.899381 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:25:30.899395 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:25:30.899408 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:25:30.899422 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:25:30.899436 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:25:30.899450 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:25:30.899463 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:25:30.899477 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:25:30.899491 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:25:30.899504 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:25:30.899518 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:25:30.899531 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:25:30.899545 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:25:30.899559 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:25:30.899573 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:25:30.899590 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:25:30.899615 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:25:30.899642 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-10 09:25:30.899669 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-10 09:25:30.899694 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-10 09:25:30.899709 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-02-10 09:25:30.899722 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-02-10 09:25:30.899736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'present'}) 2025-02-10 09:25:30.899750 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-02-10 09:25:30.899783 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-02-10 09:25:30.899797 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-02-10 09:25:30.899812 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-02-10 09:25:30.899826 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-02-10 09:25:30.899848 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-10 09:25:30.899862 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-02-10 09:25:30.899876 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-02-10 09:25:30.899890 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-02-10 09:25:30.899904 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-02-10 09:25:30.899918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-10 09:25:30.899932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-10 09:25:30.899946 | orchestrator | 2025-02-10 09:25:30.899960 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:25:30.899974 | orchestrator | Monday 10 February 2025 09:23:23 +0000 (0:00:24.787) 0:00:44.965 ******* 2025-02-10 09:25:30.899988 | orchestrator | 2025-02-10 09:25:30.900002 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:25:30.900016 | orchestrator | Monday 10 February 2025 09:23:23 +0000 (0:00:00.139) 0:00:45.105 ******* 2025-02-10 09:25:30.900030 | orchestrator | 2025-02-10 09:25:30.900043 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:25:30.900057 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:00.110) 0:00:45.216 ******* 2025-02-10 09:25:30.900071 | orchestrator | 2025-02-10 09:25:30.900085 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:25:30.900099 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:00.366) 0:00:45.582 ******* 2025-02-10 09:25:30.900112 | orchestrator | 2025-02-10 09:25:30.900126 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:25:30.900140 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:00.065) 0:00:45.648 ******* 2025-02-10 09:25:30.900154 | orchestrator | 2025-02-10 09:25:30.900168 | orchestrator | TASK [ovn-controller : 
Flush handlers] ***************************************** 2025-02-10 09:25:30.900182 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:00.197) 0:00:45.846 ******* 2025-02-10 09:25:30.900196 | orchestrator | 2025-02-10 09:25:30.900210 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-02-10 09:25:30.900223 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:00.071) 0:00:45.917 ******* 2025-02-10 09:25:30.900237 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:25:30.900255 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:25:30.900282 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.900313 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:25:30.900338 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.900365 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.900392 | orchestrator | 2025-02-10 09:25:30.900420 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-02-10 09:25:30.900446 | orchestrator | Monday 10 February 2025 09:23:27 +0000 (0:00:02.977) 0:00:48.895 ******* 2025-02-10 09:25:30.900473 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.900498 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:25:30.900525 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:25:30.900549 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:25:30.900576 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:25:30.900599 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:25:30.900625 | orchestrator | 2025-02-10 09:25:30.900652 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-02-10 09:25:30.900679 | orchestrator | 2025-02-10 09:25:30.900707 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-10 09:25:30.900745 | orchestrator | Monday 10 February 2025 09:23:44 +0000 (0:00:16.441) 0:01:05.336 ******* 2025-02-10 09:25:30.900842 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:25:30.900868 | orchestrator | 2025-02-10 09:25:30.900911 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-10 09:25:30.900936 | orchestrator | Monday 10 February 2025 09:23:45 +0000 (0:00:00.872) 0:01:06.208 ******* 2025-02-10 09:25:30.900951 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:25:30.900965 | orchestrator | 2025-02-10 09:25:30.900989 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-02-10 09:25:30.901013 | orchestrator | Monday 10 February 2025 09:23:47 +0000 (0:00:02.487) 0:01:08.696 ******* 2025-02-10 09:25:30.901037 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.901061 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.901084 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.901108 | orchestrator | 2025-02-10 09:25:30.901131 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-02-10 09:25:30.901153 | orchestrator | Monday 10 February 2025 09:23:49 +0000 (0:00:01.504) 0:01:10.201 ******* 2025-02-10 09:25:30.901168 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.901182 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.901195 | orchestrator | ok: 
[testbed-node-2] 2025-02-10 09:25:30.901209 | orchestrator | 2025-02-10 09:25:30.901223 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-02-10 09:25:30.901237 | orchestrator | Monday 10 February 2025 09:23:50 +0000 (0:00:01.471) 0:01:11.672 ******* 2025-02-10 09:25:30.901251 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.901265 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.901284 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.901298 | orchestrator | 2025-02-10 09:25:30.901312 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-02-10 09:25:30.901327 | orchestrator | Monday 10 February 2025 09:23:51 +0000 (0:00:01.281) 0:01:12.954 ******* 2025-02-10 09:25:30.901341 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.901354 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.901368 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.901381 | orchestrator | 2025-02-10 09:25:30.901395 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-02-10 09:25:30.901408 | orchestrator | Monday 10 February 2025 09:23:52 +0000 (0:00:00.999) 0:01:13.953 ******* 2025-02-10 09:25:30.901420 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.901432 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.901445 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.901456 | orchestrator | 2025-02-10 09:25:30.901469 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-02-10 09:25:30.901481 | orchestrator | Monday 10 February 2025 09:23:53 +0000 (0:00:00.775) 0:01:14.729 ******* 2025-02-10 09:25:30.901493 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.901506 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.901518 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.901530 | orchestrator | 2025-02-10 09:25:30.901542 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-02-10 09:25:30.901554 | orchestrator | Monday 10 February 2025 09:23:54 +0000 (0:00:00.714) 0:01:15.444 ******* 2025-02-10 09:25:30.901567 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.901579 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.901591 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.901603 | orchestrator | 2025-02-10 09:25:30.901616 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-02-10 09:25:30.901628 | orchestrator | Monday 10 February 2025 09:23:54 +0000 (0:00:00.600) 0:01:16.045 ******* 2025-02-10 09:25:30.901641 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.901664 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.901677 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.901689 | orchestrator | 2025-02-10 09:25:30.901701 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-02-10 09:25:30.901713 | orchestrator | Monday 10 February 2025 09:23:55 +0000 (0:00:00.839) 0:01:16.884 ******* 2025-02-10 09:25:30.901725 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.901738 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.901750 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.901780 | orchestrator | 2025-02-10 09:25:30.901792 | orchestrator | TASK 
[ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-02-10 09:25:30.901805 | orchestrator | Monday 10 February 2025 09:23:56 +0000 (0:00:00.811) 0:01:17.696 ******* 2025-02-10 09:25:30.901818 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.901830 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.901842 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.901855 | orchestrator | 2025-02-10 09:25:30.901867 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-02-10 09:25:30.901879 | orchestrator | Monday 10 February 2025 09:23:57 +0000 (0:00:00.824) 0:01:18.520 ******* 2025-02-10 09:25:30.901892 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.901904 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.901916 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.901928 | orchestrator | 2025-02-10 09:25:30.901941 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-02-10 09:25:30.901954 | orchestrator | Monday 10 February 2025 09:23:58 +0000 (0:00:01.114) 0:01:19.634 ******* 2025-02-10 09:25:30.901966 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.901978 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.901990 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902002 | orchestrator | 2025-02-10 09:25:30.902044 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-02-10 09:25:30.902059 | orchestrator | Monday 10 February 2025 09:24:00 +0000 (0:00:01.757) 0:01:21.392 ******* 2025-02-10 09:25:30.902071 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902084 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902096 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902108 | orchestrator | 2025-02-10 09:25:30.902120 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-02-10 09:25:30.902133 | orchestrator | Monday 10 February 2025 09:24:02 +0000 (0:00:01.818) 0:01:23.211 ******* 2025-02-10 09:25:30.902145 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902158 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902170 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902183 | orchestrator | 2025-02-10 09:25:30.902203 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-02-10 09:25:30.902216 | orchestrator | Monday 10 February 2025 09:24:03 +0000 (0:00:01.578) 0:01:24.789 ******* 2025-02-10 09:25:30.902229 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902243 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902256 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902269 | orchestrator | 2025-02-10 09:25:30.902286 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-02-10 09:25:30.902303 | orchestrator | Monday 10 February 2025 09:24:05 +0000 (0:00:01.732) 0:01:26.522 ******* 2025-02-10 09:25:30.902316 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902328 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902340 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902353 | orchestrator | 2025-02-10 09:25:30.902365 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] 
***************** 2025-02-10 09:25:30.902377 | orchestrator | Monday 10 February 2025 09:24:06 +0000 (0:00:01.177) 0:01:27.699 ******* 2025-02-10 09:25:30.902390 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902413 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902425 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902437 | orchestrator | 2025-02-10 09:25:30.902450 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-10 09:25:30.902462 | orchestrator | Monday 10 February 2025 09:24:07 +0000 (0:00:01.238) 0:01:28.938 ******* 2025-02-10 09:25:30.902474 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:25:30.902487 | orchestrator | 2025-02-10 09:25:30.902499 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-02-10 09:25:30.902511 | orchestrator | Monday 10 February 2025 09:24:10 +0000 (0:00:02.595) 0:01:31.533 ******* 2025-02-10 09:25:30.902524 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.902536 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.902548 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.902561 | orchestrator | 2025-02-10 09:25:30.902573 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-02-10 09:25:30.902585 | orchestrator | Monday 10 February 2025 09:24:10 +0000 (0:00:00.573) 0:01:32.107 ******* 2025-02-10 09:25:30.902597 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.902610 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.902622 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.902634 | orchestrator | 2025-02-10 09:25:30.902646 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-02-10 09:25:30.902659 | orchestrator | Monday 10 February 2025 09:24:11 +0000 (0:00:00.691) 0:01:32.799 ******* 2025-02-10 09:25:30.902671 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902683 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902695 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902707 | orchestrator | 2025-02-10 09:25:30.902720 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-02-10 09:25:30.902732 | orchestrator | Monday 10 February 2025 09:24:12 +0000 (0:00:01.087) 0:01:33.886 ******* 2025-02-10 09:25:30.902745 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902777 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902790 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902803 | orchestrator | 2025-02-10 09:25:30.902815 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-02-10 09:25:30.902828 | orchestrator | Monday 10 February 2025 09:24:14 +0000 (0:00:01.747) 0:01:35.633 ******* 2025-02-10 09:25:30.902840 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902852 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902864 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902876 | orchestrator | 2025-02-10 09:25:30.902888 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-02-10 09:25:30.902901 | orchestrator | Monday 10 February 2025 09:24:15 +0000 (0:00:00.899) 0:01:36.533 ******* 
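The lookup and bootstrap tasks above establish whether an OVN NB/SB Raft cluster already exists and, if so, which member currently leads it. As a rough, non-authoritative sketch of the same check, assuming the kolla container names ovn_nb_db and ovn_sb_db that appear in this log and the stock ovn-appctl control sockets (the socket paths inside the kolla images are an assumption and may differ):

    import subprocess

    # Database name -> (container name seen in this log, control socket path).
    # The socket paths are assumptions; adjust them to the actual image layout.
    CLUSTERS = {
        "OVN_Northbound": ("ovn_nb_db", "/run/ovn/ovnnb_db.ctl"),
        "OVN_Southbound": ("ovn_sb_db", "/run/ovn/ovnsb_db.ctl"),
    }

    def cluster_role(database: str) -> str:
        """Return the Raft role (leader/follower/candidate) reported by ovn-appctl."""
        container, socket = CLUSTERS[database]
        result = subprocess.run(
            ["docker", "exec", container,
             "ovn-appctl", "-t", socket, "cluster/status", database],
            capture_output=True, text=True, check=True,
        )
        # cluster/status prints one "Role: ..." line among the cluster details.
        for line in result.stdout.splitlines():
            if line.startswith("Role:"):
                return line.split(":", 1)[1].strip()
        return "unknown"

    if __name__ == "__main__":
        for db in CLUSTERS:
            print(db, cluster_role(db))

This mirrors what can be seen later in the run: the "Get OVN_Northbound cluster leader" and "Get OVN_Southbound cluster leader" tasks are followed by connection-settings tasks that are applied on testbed-node-0 only and skipped on the other two members.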
2025-02-10 09:25:30.902913 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902925 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.902938 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.902950 | orchestrator | 2025-02-10 09:25:30.902962 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-02-10 09:25:30.902974 | orchestrator | Monday 10 February 2025 09:24:16 +0000 (0:00:01.105) 0:01:37.638 ******* 2025-02-10 09:25:30.902986 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.902999 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.903011 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.903023 | orchestrator | 2025-02-10 09:25:30.903036 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-02-10 09:25:30.903048 | orchestrator | Monday 10 February 2025 09:24:17 +0000 (0:00:00.689) 0:01:38.327 ******* 2025-02-10 09:25:30.903060 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.903073 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.903086 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.903103 | orchestrator | 2025-02-10 09:25:30.903116 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-02-10 09:25:30.903128 | orchestrator | Monday 10 February 2025 09:24:17 +0000 (0:00:00.741) 0:01:39.069 ******* 2025-02-10 09:25:30.903143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903272 | orchestrator | 2025-02-10 09:25:30.903285 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-02-10 09:25:30.903297 | orchestrator | Monday 10 February 2025 09:24:19 +0000 (0:00:01.697) 0:01:40.766 ******* 2025-02-10 09:25:30.903310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903439 | orchestrator | 2025-02-10 09:25:30.903451 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-02-10 09:25:30.903464 | orchestrator | Monday 10 February 2025 09:24:25 +0000 (0:00:06.101) 0:01:46.868 ******* 2025-02-10 09:25:30.903476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
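Each item echoed by the "Ensuring config directories exist", "Copying over config.json files for services" and "Check ovn containers" tasks is a kolla-style service definition: container name, image, bind mounts and named volumes, plus an (empty) dimensions dict. Purely as an illustration of how such a definition maps onto a container invocation (the actual deployment is driven by kolla-ansible's own container modules, not by anything like this snippet), a small sketch reusing the ovn-nb-db values from the log:

    # Kolla-style service definition, values copied from the ovn-nb-db item above.
    service = {
        "container_name": "ovn_nb_db",
        "image": "nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206",
        "volumes": [
            "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
            "kolla_logs:/var/log/kolla/",
        ],
    }

    def docker_run_args(svc: dict) -> list[str]:
        """Translate the definition into a docker run command line (illustration only)."""
        args = ["docker", "run", "--detach", "--name", svc["container_name"]]
        for volume in svc["volumes"]:
            args += ["--volume", volume]
        args.append(svc["image"])
        return args

    print(" ".join(docker_run_args(service)))

The named volumes in these definitions (ovn_nb_db, ovn_sb_db) are presumably what the earlier "Checking for any existing OVN DB container volumes" task probes when deciding whether the cluster has to be bootstrapped from scratch.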
2025-02-10 09:25:30.903496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.903612 | orchestrator | 2025-02-10 09:25:30.903625 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:25:30.903637 | orchestrator | Monday 10 February 2025 09:24:28 +0000 (0:00:02.975) 0:01:49.844 ******* 2025-02-10 09:25:30.903649 | orchestrator | 2025-02-10 09:25:30.903662 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:25:30.903678 | orchestrator | Monday 10 February 2025 09:24:28 +0000 (0:00:00.069) 0:01:49.913 ******* 2025-02-10 09:25:30.903690 | orchestrator | 2025-02-10 09:25:30.903703 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:25:30.903715 | orchestrator | Monday 10 February 2025 09:24:28 +0000 (0:00:00.070) 0:01:49.983 ******* 2025-02-10 09:25:30.903727 | orchestrator | 2025-02-10 09:25:30.903740 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-02-10 09:25:30.903773 | orchestrator | Monday 10 February 2025 09:24:28 +0000 (0:00:00.061) 0:01:50.044 ******* 2025-02-10 09:25:30.903787 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.903799 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:25:30.903811 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:25:30.903824 | orchestrator | 2025-02-10 09:25:30.903836 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-02-10 09:25:30.903848 | orchestrator | Monday 10 February 2025 09:24:31 +0000 (0:00:02.771) 0:01:52.815 ******* 2025-02-10 09:25:30.903860 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.903873 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:25:30.903885 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:25:30.903897 | orchestrator | 2025-02-10 09:25:30.903910 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-02-10 09:25:30.903922 | orchestrator | Monday 10 February 2025 09:24:34 +0000 (0:00:03.027) 0:01:55.843 ******* 2025-02-10 09:25:30.903934 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:25:30.903946 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.903958 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:25:30.903970 | orchestrator | 2025-02-10 09:25:30.903988 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-02-10 09:25:30.904001 | orchestrator | Monday 10 February 2025 09:24:42 +0000 (0:00:07.737) 0:02:03.581 ******* 2025-02-10 09:25:30.904014 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.904026 | orchestrator | 2025-02-10 09:25:30.904038 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-02-10 09:25:30.904050 | orchestrator | Monday 10 February 2025 09:24:42 +0000 (0:00:00.137) 0:02:03.719 ******* 2025-02-10 09:25:30.904062 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.904074 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.904087 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.904099 | orchestrator | 2025-02-10 09:25:30.904111 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-02-10 09:25:30.904123 | orchestrator | Monday 10 February 2025 09:24:43 +0000 (0:00:01.073) 0:02:04.792 
******* 2025-02-10 09:25:30.904135 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.904147 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.904160 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.904172 | orchestrator | 2025-02-10 09:25:30.904184 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-02-10 09:25:30.904196 | orchestrator | Monday 10 February 2025 09:24:44 +0000 (0:00:00.651) 0:02:05.444 ******* 2025-02-10 09:25:30.904215 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.904227 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.904244 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.904257 | orchestrator | 2025-02-10 09:25:30.904269 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-02-10 09:25:30.904281 | orchestrator | Monday 10 February 2025 09:24:45 +0000 (0:00:01.008) 0:02:06.452 ******* 2025-02-10 09:25:30.904294 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.904306 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.904318 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.904330 | orchestrator | 2025-02-10 09:25:30.904343 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-02-10 09:25:30.904355 | orchestrator | Monday 10 February 2025 09:24:45 +0000 (0:00:00.648) 0:02:07.100 ******* 2025-02-10 09:25:30.904367 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.904379 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.904391 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.904403 | orchestrator | 2025-02-10 09:25:30.904416 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-02-10 09:25:30.904428 | orchestrator | Monday 10 February 2025 09:24:47 +0000 (0:00:01.407) 0:02:08.508 ******* 2025-02-10 09:25:30.904440 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.904452 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.904464 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.904476 | orchestrator | 2025-02-10 09:25:30.904489 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-02-10 09:25:30.904501 | orchestrator | Monday 10 February 2025 09:24:48 +0000 (0:00:00.883) 0:02:09.391 ******* 2025-02-10 09:25:30.904513 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.904525 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.904537 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.904549 | orchestrator | 2025-02-10 09:25:30.904562 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-02-10 09:25:30.904574 | orchestrator | Monday 10 February 2025 09:24:48 +0000 (0:00:00.638) 0:02:10.030 ******* 2025-02-10 09:25:30.904587 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904601 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904614 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904627 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904644 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904663 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904675 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904688 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904703 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904716 | orchestrator | 2025-02-10 09:25:30.904729 | orchestrator | TASK [ovn-db : Copying over config.json files for 
services] ******************** 2025-02-10 09:25:30.904741 | orchestrator | Monday 10 February 2025 09:24:50 +0000 (0:00:01.967) 0:02:11.997 ******* 2025-02-10 09:25:30.904768 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904781 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904794 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904806 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904857 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904894 | orchestrator | 2025-02-10 09:25:30.904907 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-02-10 09:25:30.904919 | orchestrator | Monday 10 February 2025 09:24:55 +0000 (0:00:04.651) 0:02:16.649 ******* 2025-02-10 09:25:30.904932 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904944 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904960 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904973 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.904991 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.905010 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-02-10 09:25:30.905022 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.905035 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.905047 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:25:30.905060 | orchestrator | 2025-02-10 09:25:30.905072 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:25:30.905089 | orchestrator | Monday 10 February 2025 09:25:00 +0000 (0:00:04.607) 0:02:21.257 ******* 2025-02-10 09:25:30.905102 | orchestrator | 2025-02-10 09:25:30.905115 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:25:30.905127 | orchestrator | Monday 10 February 2025 09:25:00 +0000 (0:00:00.243) 0:02:21.502 ******* 2025-02-10 09:25:30.905139 | orchestrator | 2025-02-10 09:25:30.905152 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:25:30.905164 | orchestrator | Monday 10 February 2025 09:25:00 +0000 (0:00:00.515) 0:02:22.017 ******* 2025-02-10 09:25:30.905176 | orchestrator | 2025-02-10 09:25:30.905188 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-02-10 09:25:30.905200 | orchestrator | Monday 10 February 2025 09:25:00 +0000 (0:00:00.088) 0:02:22.106 ******* 2025-02-10 09:25:30.905213 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:25:30.905225 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:25:30.905238 | orchestrator | 2025-02-10 09:25:30.905250 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-02-10 09:25:30.905262 | orchestrator | Monday 10 February 2025 09:25:07 +0000 (0:00:06.478) 0:02:28.585 ******* 2025-02-10 09:25:30.905275 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:25:30.905287 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:25:30.905299 | orchestrator | 2025-02-10 09:25:30.905312 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-02-10 09:25:30.905324 | orchestrator | Monday 10 February 2025 09:25:14 +0000 (0:00:07.319) 0:02:35.904 ******* 2025-02-10 09:25:30.905343 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:25:30.905355 | orchestrator | changed: [testbed-node-2] 2025-02-10 
09:25:30.905367 | orchestrator | 2025-02-10 09:25:30.905380 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-02-10 09:25:30.905392 | orchestrator | Monday 10 February 2025 09:25:21 +0000 (0:00:06.407) 0:02:42.312 ******* 2025-02-10 09:25:30.905405 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:25:30.905417 | orchestrator | 2025-02-10 09:25:30.905429 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-02-10 09:25:30.905442 | orchestrator | Monday 10 February 2025 09:25:21 +0000 (0:00:00.359) 0:02:42.672 ******* 2025-02-10 09:25:30.905454 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.905466 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.905478 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.905490 | orchestrator | 2025-02-10 09:25:30.905502 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-02-10 09:25:30.905514 | orchestrator | Monday 10 February 2025 09:25:22 +0000 (0:00:00.903) 0:02:43.576 ******* 2025-02-10 09:25:30.905527 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.905539 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.905551 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.905563 | orchestrator | 2025-02-10 09:25:30.905575 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-02-10 09:25:30.905588 | orchestrator | Monday 10 February 2025 09:25:23 +0000 (0:00:00.699) 0:02:44.275 ******* 2025-02-10 09:25:30.905600 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.905612 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:30.905624 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:30.905637 | orchestrator | 2025-02-10 09:25:30.905649 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-02-10 09:25:30.905662 | orchestrator | Monday 10 February 2025 09:25:24 +0000 (0:00:01.709) 0:02:45.984 ******* 2025-02-10 09:25:30.905674 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:25:30.905686 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:25:30.905699 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:25:30.905711 | orchestrator | 2025-02-10 09:25:30.905723 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-02-10 09:25:30.905735 | orchestrator | Monday 10 February 2025 09:25:25 +0000 (0:00:00.947) 0:02:46.932 ******* 2025-02-10 09:25:30.905748 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:30.905783 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:33.964441 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:33.964591 | orchestrator | 2025-02-10 09:25:33.964610 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-02-10 09:25:33.964622 | orchestrator | Monday 10 February 2025 09:25:26 +0000 (0:00:00.939) 0:02:47.871 ******* 2025-02-10 09:25:33.964632 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:25:33.964642 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:25:33.964651 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:25:33.964660 | orchestrator | 2025-02-10 09:25:33.964670 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:25:33.964681 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  
rescued=0 ignored=0
2025-02-10 09:25:33.964693 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-02-10 09:25:33.964703 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-02-10 09:25:33.964713 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:25:33.964723 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:25:33.964799 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:25:33.964809 | orchestrator |
2025-02-10 09:25:33.964818 | orchestrator |
2025-02-10 09:25:33.964826 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:25:33.964835 | orchestrator | Monday 10 February 2025 09:25:28 +0000 (0:00:01.498) 0:02:49.370 *******
2025-02-10 09:25:33.964843 | orchestrator | ===============================================================================
2025-02-10 09:25:33.964852 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 24.79s
2025-02-10 09:25:33.964861 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 16.44s
2025-02-10 09:25:33.964869 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.15s
2025-02-10 09:25:33.964895 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 10.35s
2025-02-10 09:25:33.964904 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.25s
2025-02-10 09:25:33.964912 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.10s
2025-02-10 09:25:33.964920 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.65s
2025-02-10 09:25:33.964929 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 4.61s
2025-02-10 09:25:33.964937 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.71s
2025-02-10 09:25:33.964946 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.28s
2025-02-10 09:25:33.964954 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.98s
2025-02-10 09:25:33.964962 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.98s
2025-02-10 09:25:33.964971 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.60s
2025-02-10 09:25:33.964978 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.54s
2025-02-10 09:25:33.964987 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.49s
2025-02-10 09:25:33.964996 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.41s
2025-02-10 09:25:33.965004 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.00s
2025-02-10 09:25:33.965012 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.00s
2025-02-10 09:25:33.965020 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.97s
2025-02-10 09:25:33.965028 | orchestrator | ovn-db : Check
OVN SB service port liveness ----------------------------- 1.82s 2025-02-10 09:25:33.965038 | orchestrator | 2025-02-10 09:25:30 | INFO  | Task 9815f12d-7e73-4884-b7b4-1dde02616d49 is in state SUCCESS 2025-02-10 09:25:33.965046 | orchestrator | 2025-02-10 09:25:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:33.965080 | orchestrator | 2025-02-10 09:25:33 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:37.020541 | orchestrator | 2025-02-10 09:25:33 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:37.020676 | orchestrator | 2025-02-10 09:25:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:37.020704 | orchestrator | 2025-02-10 09:25:37 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:40.074933 | orchestrator | 2025-02-10 09:25:37 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:40.075056 | orchestrator | 2025-02-10 09:25:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:40.075088 | orchestrator | 2025-02-10 09:25:40 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:40.075925 | orchestrator | 2025-02-10 09:25:40 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:40.076549 | orchestrator | 2025-02-10 09:25:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:43.130536 | orchestrator | 2025-02-10 09:25:43 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:43.132431 | orchestrator | 2025-02-10 09:25:43 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:46.198231 | orchestrator | 2025-02-10 09:25:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:46.198386 | orchestrator | 2025-02-10 09:25:46 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:49.237281 | orchestrator | 2025-02-10 09:25:46 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:49.237387 | orchestrator | 2025-02-10 09:25:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:49.237409 | orchestrator | 2025-02-10 09:25:49 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:52.314466 | orchestrator | 2025-02-10 09:25:49 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:52.314610 | orchestrator | 2025-02-10 09:25:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:52.314646 | orchestrator | 2025-02-10 09:25:52 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:55.364448 | orchestrator | 2025-02-10 09:25:52 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:55.364559 | orchestrator | 2025-02-10 09:25:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:55.364588 | orchestrator | 2025-02-10 09:25:55 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:58.406501 | orchestrator | 2025-02-10 09:25:55 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:25:58.406625 | orchestrator | 2025-02-10 09:25:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:58.406651 | orchestrator | 2025-02-10 09:25:58 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:25:58.407311 | 
orchestrator | 2025-02-10 09:25:58 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:01.480039 | orchestrator | 2025-02-10 09:25:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:01.480192 | orchestrator | 2025-02-10 09:26:01 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:04.518292 | orchestrator | 2025-02-10 09:26:01 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:04.518419 | orchestrator | 2025-02-10 09:26:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:04.518454 | orchestrator | 2025-02-10 09:26:04 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:04.518932 | orchestrator | 2025-02-10 09:26:04 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:04.519188 | orchestrator | 2025-02-10 09:26:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:07.580689 | orchestrator | 2025-02-10 09:26:07 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:10.632981 | orchestrator | 2025-02-10 09:26:07 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:10.633123 | orchestrator | 2025-02-10 09:26:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:10.633215 | orchestrator | 2025-02-10 09:26:10 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:10.633728 | orchestrator | 2025-02-10 09:26:10 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:10.634304 | orchestrator | 2025-02-10 09:26:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:13.689096 | orchestrator | 2025-02-10 09:26:13 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:13.689811 | orchestrator | 2025-02-10 09:26:13 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:13.690009 | orchestrator | 2025-02-10 09:26:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:16.732496 | orchestrator | 2025-02-10 09:26:16 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:19.788363 | orchestrator | 2025-02-10 09:26:16 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:19.788510 | orchestrator | 2025-02-10 09:26:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:19.788554 | orchestrator | 2025-02-10 09:26:19 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:22.834108 | orchestrator | 2025-02-10 09:26:19 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:22.834266 | orchestrator | 2025-02-10 09:26:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:22.834316 | orchestrator | 2025-02-10 09:26:22 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:22.836046 | orchestrator | 2025-02-10 09:26:22 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:25.885842 | orchestrator | 2025-02-10 09:26:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:25.885954 | orchestrator | 2025-02-10 09:26:25 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:25.889562 | orchestrator | 2025-02-10 09:26:25 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state 
STARTED 2025-02-10 09:26:28.942362 | orchestrator | 2025-02-10 09:26:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:28.942524 | orchestrator | 2025-02-10 09:26:28 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:31.990156 | orchestrator | 2025-02-10 09:26:28 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:31.990297 | orchestrator | 2025-02-10 09:26:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:31.990336 | orchestrator | 2025-02-10 09:26:31 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:31.990773 | orchestrator | 2025-02-10 09:26:31 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:31.990846 | orchestrator | 2025-02-10 09:26:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:35.063161 | orchestrator | 2025-02-10 09:26:35 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:38.122431 | orchestrator | 2025-02-10 09:26:35 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:38.122582 | orchestrator | 2025-02-10 09:26:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:38.122620 | orchestrator | 2025-02-10 09:26:38 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:38.123385 | orchestrator | 2025-02-10 09:26:38 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:41.175258 | orchestrator | 2025-02-10 09:26:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:41.175373 | orchestrator | 2025-02-10 09:26:41 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:44.225428 | orchestrator | 2025-02-10 09:26:41 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:44.225570 | orchestrator | 2025-02-10 09:26:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:44.225609 | orchestrator | 2025-02-10 09:26:44 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:44.226345 | orchestrator | 2025-02-10 09:26:44 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:44.226726 | orchestrator | 2025-02-10 09:26:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:47.283645 | orchestrator | 2025-02-10 09:26:47 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:47.286518 | orchestrator | 2025-02-10 09:26:47 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:50.345459 | orchestrator | 2025-02-10 09:26:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:50.345622 | orchestrator | 2025-02-10 09:26:50 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:53.399579 | orchestrator | 2025-02-10 09:26:50 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:53.399722 | orchestrator | 2025-02-10 09:26:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:53.399762 | orchestrator | 2025-02-10 09:26:53 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:53.400134 | orchestrator | 2025-02-10 09:26:53 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:56.452679 | orchestrator | 2025-02-10 09:26:53 | INFO  | Wait 1 second(s) 
until the next check 2025-02-10 09:26:56.452877 | orchestrator | 2025-02-10 09:26:56 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:56.455917 | orchestrator | 2025-02-10 09:26:56 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:26:59.507195 | orchestrator | 2025-02-10 09:26:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:59.507342 | orchestrator | 2025-02-10 09:26:59 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:26:59.508174 | orchestrator | 2025-02-10 09:26:59 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:02.556885 | orchestrator | 2025-02-10 09:26:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:02.557054 | orchestrator | 2025-02-10 09:27:02 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:02.558171 | orchestrator | 2025-02-10 09:27:02 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:05.626712 | orchestrator | 2025-02-10 09:27:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:05.626948 | orchestrator | 2025-02-10 09:27:05 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:05.627265 | orchestrator | 2025-02-10 09:27:05 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:05.627310 | orchestrator | 2025-02-10 09:27:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:08.666636 | orchestrator | 2025-02-10 09:27:08 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:11.734334 | orchestrator | 2025-02-10 09:27:08 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:11.734459 | orchestrator | 2025-02-10 09:27:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:11.734495 | orchestrator | 2025-02-10 09:27:11 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:14.781526 | orchestrator | 2025-02-10 09:27:11 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:14.781663 | orchestrator | 2025-02-10 09:27:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:14.781713 | orchestrator | 2025-02-10 09:27:14 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:17.834443 | orchestrator | 2025-02-10 09:27:14 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:17.834584 | orchestrator | 2025-02-10 09:27:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:17.834624 | orchestrator | 2025-02-10 09:27:17 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:17.835918 | orchestrator | 2025-02-10 09:27:17 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:20.880278 | orchestrator | 2025-02-10 09:27:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:20.880398 | orchestrator | 2025-02-10 09:27:20 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:20.882994 | orchestrator | 2025-02-10 09:27:20 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:20.884090 | orchestrator | 2025-02-10 09:27:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:23.951663 | orchestrator | 2025-02-10 09:27:23 | INFO  | 
Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:23.953739 | orchestrator | 2025-02-10 09:27:23 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:27.008363 | orchestrator | 2025-02-10 09:27:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:27.008612 | orchestrator | 2025-02-10 09:27:27 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:27.008899 | orchestrator | 2025-02-10 09:27:27 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:27.009102 | orchestrator | 2025-02-10 09:27:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:30.070282 | orchestrator | 2025-02-10 09:27:30 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:30.070508 | orchestrator | 2025-02-10 09:27:30 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:30.070537 | orchestrator | 2025-02-10 09:27:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:33.117315 | orchestrator | 2025-02-10 09:27:33 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:33.117733 | orchestrator | 2025-02-10 09:27:33 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:36.168635 | orchestrator | 2025-02-10 09:27:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:36.168773 | orchestrator | 2025-02-10 09:27:36 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:39.214503 | orchestrator | 2025-02-10 09:27:36 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:39.214641 | orchestrator | 2025-02-10 09:27:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:39.214694 | orchestrator | 2025-02-10 09:27:39 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:42.287210 | orchestrator | 2025-02-10 09:27:39 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:42.287388 | orchestrator | 2025-02-10 09:27:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:42.287434 | orchestrator | 2025-02-10 09:27:42 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:42.291133 | orchestrator | 2025-02-10 09:27:42 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:45.342963 | orchestrator | 2025-02-10 09:27:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:45.343129 | orchestrator | 2025-02-10 09:27:45 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:45.343388 | orchestrator | 2025-02-10 09:27:45 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:45.343639 | orchestrator | 2025-02-10 09:27:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:48.396115 | orchestrator | 2025-02-10 09:27:48 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:48.397399 | orchestrator | 2025-02-10 09:27:48 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:51.448034 | orchestrator | 2025-02-10 09:27:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:51.448210 | orchestrator | 2025-02-10 09:27:51 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:51.448859 | 
orchestrator | 2025-02-10 09:27:51 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:54.507111 | orchestrator | 2025-02-10 09:27:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:54.507268 | orchestrator | 2025-02-10 09:27:54 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:54.508864 | orchestrator | 2025-02-10 09:27:54 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:27:54.510503 | orchestrator | 2025-02-10 09:27:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:57.567136 | orchestrator | 2025-02-10 09:27:57 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:27:57.569371 | orchestrator | 2025-02-10 09:27:57 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:00.626670 | orchestrator | 2025-02-10 09:27:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:00.626809 | orchestrator | 2025-02-10 09:28:00 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:00.628016 | orchestrator | 2025-02-10 09:28:00 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:03.673268 | orchestrator | 2025-02-10 09:28:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:03.673427 | orchestrator | 2025-02-10 09:28:03 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:03.674100 | orchestrator | 2025-02-10 09:28:03 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:06.717194 | orchestrator | 2025-02-10 09:28:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:06.717331 | orchestrator | 2025-02-10 09:28:06 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:06.718448 | orchestrator | 2025-02-10 09:28:06 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:09.770605 | orchestrator | 2025-02-10 09:28:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:09.770779 | orchestrator | 2025-02-10 09:28:09 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:09.772285 | orchestrator | 2025-02-10 09:28:09 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:12.821646 | orchestrator | 2025-02-10 09:28:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:12.821804 | orchestrator | 2025-02-10 09:28:12 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:15.863658 | orchestrator | 2025-02-10 09:28:12 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:15.863798 | orchestrator | 2025-02-10 09:28:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:15.863897 | orchestrator | 2025-02-10 09:28:15 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:15.867443 | orchestrator | 2025-02-10 09:28:15 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:18.909386 | orchestrator | 2025-02-10 09:28:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:18.909521 | orchestrator | 2025-02-10 09:28:18 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:18.910097 | orchestrator | 2025-02-10 09:28:18 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state 
STARTED 2025-02-10 09:28:18.910614 | orchestrator | 2025-02-10 09:28:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:21.955897 | orchestrator | 2025-02-10 09:28:21 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:24.992273 | orchestrator | 2025-02-10 09:28:21 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:24.992391 | orchestrator | 2025-02-10 09:28:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:24.992413 | orchestrator | 2025-02-10 09:28:24 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:24.993098 | orchestrator | 2025-02-10 09:28:24 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:28.038456 | orchestrator | 2025-02-10 09:28:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:28.038664 | orchestrator | 2025-02-10 09:28:28 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:31.083758 | orchestrator | 2025-02-10 09:28:28 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:31.083918 | orchestrator | 2025-02-10 09:28:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:31.083949 | orchestrator | 2025-02-10 09:28:31 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:34.138762 | orchestrator | 2025-02-10 09:28:31 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:34.139016 | orchestrator | 2025-02-10 09:28:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:34.139080 | orchestrator | 2025-02-10 09:28:34 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:34.144046 | orchestrator | 2025-02-10 09:28:34 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:34.144667 | orchestrator | 2025-02-10 09:28:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:37.200399 | orchestrator | 2025-02-10 09:28:37 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:40.249336 | orchestrator | 2025-02-10 09:28:37 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:40.249496 | orchestrator | 2025-02-10 09:28:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:40.249530 | orchestrator | 2025-02-10 09:28:40 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:40.249949 | orchestrator | 2025-02-10 09:28:40 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:43.311986 | orchestrator | 2025-02-10 09:28:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:43.312151 | orchestrator | 2025-02-10 09:28:43 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:43.312914 | orchestrator | 2025-02-10 09:28:43 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:46.380153 | orchestrator | 2025-02-10 09:28:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:46.380318 | orchestrator | 2025-02-10 09:28:46 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:46.380805 | orchestrator | 2025-02-10 09:28:46 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:46.382725 | orchestrator | 2025-02-10 09:28:46 | INFO  | Wait 1 second(s) 
until the next check 2025-02-10 09:28:49.426575 | orchestrator | 2025-02-10 09:28:49 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:52.466807 | orchestrator | 2025-02-10 09:28:49 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:52.466959 | orchestrator | 2025-02-10 09:28:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:52.466985 | orchestrator | 2025-02-10 09:28:52 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:52.467134 | orchestrator | 2025-02-10 09:28:52 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:55.509786 | orchestrator | 2025-02-10 09:28:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:55.509923 | orchestrator | 2025-02-10 09:28:55 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:28:55.510496 | orchestrator | 2025-02-10 09:28:55 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:28:58.563984 | orchestrator | 2025-02-10 09:28:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:58.564128 | orchestrator | 2025-02-10 09:28:58 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:01.605277 | orchestrator | 2025-02-10 09:28:58 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:01.605430 | orchestrator | 2025-02-10 09:28:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:01.605474 | orchestrator | 2025-02-10 09:29:01 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:04.650291 | orchestrator | 2025-02-10 09:29:01 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:04.650423 | orchestrator | 2025-02-10 09:29:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:04.650460 | orchestrator | 2025-02-10 09:29:04 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:04.651719 | orchestrator | 2025-02-10 09:29:04 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:07.713804 | orchestrator | 2025-02-10 09:29:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:07.713950 | orchestrator | 2025-02-10 09:29:07 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:10.764293 | orchestrator | 2025-02-10 09:29:07 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:10.764442 | orchestrator | 2025-02-10 09:29:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:10.764507 | orchestrator | 2025-02-10 09:29:10 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:13.816067 | orchestrator | 2025-02-10 09:29:10 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:13.816344 | orchestrator | 2025-02-10 09:29:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:13.816391 | orchestrator | 2025-02-10 09:29:13 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:16.870796 | orchestrator | 2025-02-10 09:29:13 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:16.870991 | orchestrator | 2025-02-10 09:29:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:16.871035 | orchestrator | 2025-02-10 09:29:16 | INFO  | 
Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:19.919098 | orchestrator | 2025-02-10 09:29:16 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:19.919231 | orchestrator | 2025-02-10 09:29:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:19.919266 | orchestrator | 2025-02-10 09:29:19 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:19.919783 | orchestrator | 2025-02-10 09:29:19 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:22.974245 | orchestrator | 2025-02-10 09:29:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:22.974423 | orchestrator | 2025-02-10 09:29:22 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state STARTED 2025-02-10 09:29:26.043359 | orchestrator | 2025-02-10 09:29:22 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:26.043478 | orchestrator | 2025-02-10 09:29:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:26.043509 | orchestrator | 2025-02-10 09:29:26 | INFO  | Task ed4301b9-fc21-4732-b103-f0e045a3a493 is in state SUCCESS 2025-02-10 09:29:26.045267 | orchestrator | 2025-02-10 09:29:26.045304 | orchestrator | 2025-02-10 09:29:26.045317 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:29:26.045330 | orchestrator | 2025-02-10 09:29:26.045342 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:29:26.045355 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:00.425) 0:00:00.425 ******* 2025-02-10 09:29:26.045423 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.045441 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.045454 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.045466 | orchestrator | 2025-02-10 09:29:26.045500 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:29:26.045514 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:00.530) 0:00:00.956 ******* 2025-02-10 09:29:26.045527 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-02-10 09:29:26.045980 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-02-10 09:29:26.045995 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-02-10 09:29:26.046008 | orchestrator | 2025-02-10 09:29:26.046075 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-02-10 09:29:26.046089 | orchestrator | 2025-02-10 09:29:26.046102 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-02-10 09:29:26.046114 | orchestrator | Monday 10 February 2025 09:21:02 +0000 (0:00:00.750) 0:00:01.706 ******* 2025-02-10 09:29:26.046158 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.046171 | orchestrator | 2025-02-10 09:29:26.046184 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-02-10 09:29:26.046196 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:01.516) 0:00:03.222 ******* 2025-02-10 09:29:26.046208 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.046222 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.046234 | 
orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.046247 | orchestrator | 2025-02-10 09:29:26.046259 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-02-10 09:29:26.046272 | orchestrator | Monday 10 February 2025 09:21:05 +0000 (0:00:01.123) 0:00:04.346 ******* 2025-02-10 09:29:26.046285 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.046317 | orchestrator | 2025-02-10 09:29:26.046329 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-02-10 09:29:26.046342 | orchestrator | Monday 10 February 2025 09:21:06 +0000 (0:00:01.890) 0:00:06.237 ******* 2025-02-10 09:29:26.046354 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.046366 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.046379 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.046391 | orchestrator | 2025-02-10 09:29:26.046404 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-02-10 09:29:26.046416 | orchestrator | Monday 10 February 2025 09:21:09 +0000 (0:00:02.680) 0:00:08.917 ******* 2025-02-10 09:29:26.046437 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:29:26.046450 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:29:26.046463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:29:26.046475 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:29:26.046488 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-10 09:29:26.046501 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-10 09:29:26.046514 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:29:26.046530 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-10 09:29:26.046543 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-10 09:29:26.046555 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:29:26.046567 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-10 09:29:26.046579 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-10 09:29:26.046592 | orchestrator | 2025-02-10 09:29:26.046604 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-10 09:29:26.046616 | orchestrator | Monday 10 February 2025 09:21:15 +0000 (0:00:05.637) 0:00:14.554 ******* 2025-02-10 09:29:26.046628 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-02-10 09:29:26.046641 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-02-10 09:29:26.046654 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-02-10 09:29:26.046666 | orchestrator | 2025-02-10 09:29:26.046678 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-10 09:29:26.046691 | orchestrator | Monday 10 February 2025 
09:21:16 +0000 (0:00:01.278) 0:00:15.832 ******* 2025-02-10 09:29:26.046703 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-02-10 09:29:26.046715 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-02-10 09:29:26.046737 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-02-10 09:29:26.046749 | orchestrator | 2025-02-10 09:29:26.046762 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-10 09:29:26.046774 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:01.893) 0:00:17.726 ******* 2025-02-10 09:29:26.046786 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-02-10 09:29:26.046799 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.046822 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-02-10 09:29:26.046835 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.046847 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-02-10 09:29:26.046880 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.046893 | orchestrator | 2025-02-10 09:29:26.047052 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-02-10 09:29:26.047065 | orchestrator | Monday 10 February 2025 09:21:19 +0000 (0:00:00.853) 0:00:18.580 ******* 2025-02-10 09:29:26.047081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.047098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.047113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.047127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.047141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.047175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.047191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.047207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.047221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', 
'__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.047236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.047250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.047271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.047285 | orchestrator | 2025-02-10 09:29:26.047298 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-02-10 09:29:26.047311 | orchestrator | Monday 10 February 2025 09:21:21 +0000 (0:00:02.477) 0:00:21.057 ******* 2025-02-10 09:29:26.047324 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:26.047343 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:29:26.047355 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:29:26.047368 | orchestrator | 2025-02-10 09:29:26.047387 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-02-10 09:29:26.047400 | orchestrator | Monday 10 February 2025 09:21:24 +0000 (0:00:03.126) 0:00:24.184 ******* 2025-02-10 09:29:26.047413 | orchestrator | skipping: [testbed-node-0] => (item=users)  2025-02-10 09:29:26.047425 | orchestrator | skipping: [testbed-node-1] => (item=users)  2025-02-10 09:29:26.047438 | orchestrator | skipping: [testbed-node-2] => (item=users)  2025-02-10 09:29:26.047451 | orchestrator | skipping: [testbed-node-1] => (item=rules)  2025-02-10 09:29:26.047463 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.047476 | orchestrator | skipping: [testbed-node-0] => (item=rules)  
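The loadbalancer tasks above ("Ensuring config directories exist", "Removing checks for services which are disabled", and so on) iterate over per-service dictionaries for haproxy, proxysql, keepalived and haproxy-ssh, and skip any entry whose 'enabled' flag is false — which is why proxysql and haproxy-ssh appear as "skipping" while haproxy and keepalived appear as "changed". kolla-ansible expresses this with Ansible loops and when-conditions; the Python below is only an illustrative sketch of that selection logic, with the service definitions abbreviated from the dictionaries printed in the log, not the role's actual code.

# Illustrative sketch only: the enabled/disabled selection that the task
# output above reflects. Service names and flags are taken from the log;
# everything else is abbreviated.
services = {
    "haproxy": {"container_name": "haproxy", "enabled": True},
    "proxysql": {"container_name": "proxysql", "enabled": False},
    "keepalived": {"container_name": "keepalived", "enabled": True},
    "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
}

def enabled_services(svcs):
    """Yield (name, definition) for services whose 'enabled' flag is truthy."""
    for name, definition in svcs.items():
        if definition.get("enabled"):
            yield name, definition

for name, definition in enabled_services(services):
    # A deploy task would now create the config directory and copy config.json
    # for this container; disabled services are skipped, matching the
    # "skipping" lines for proxysql and haproxy-ssh in the log.
    print(f"would ensure config directory for {definition['container_name']}")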
2025-02-10 09:29:26.047488 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.047500 | orchestrator | skipping: [testbed-node-2] => (item=rules)  2025-02-10 09:29:26.047513 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.048804 | orchestrator | 2025-02-10 09:29:26.048828 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-02-10 09:29:26.048842 | orchestrator | Monday 10 February 2025 09:21:29 +0000 (0:00:04.298) 0:00:28.482 ******* 2025-02-10 09:29:26.048878 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:26.048891 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:29:26.048904 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:29:26.048917 | orchestrator | 2025-02-10 09:29:26.048930 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-02-10 09:29:26.048943 | orchestrator | Monday 10 February 2025 09:21:31 +0000 (0:00:01.882) 0:00:30.365 ******* 2025-02-10 09:29:26.048956 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.049387 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.049409 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.049422 | orchestrator | 2025-02-10 09:29:26.049435 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-02-10 09:29:26.049447 | orchestrator | Monday 10 February 2025 09:21:33 +0000 (0:00:02.125) 0:00:32.490 ******* 2025-02-10 09:29:26.049461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.049476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.049502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.049515 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-10 09:29:26.049560 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-10 09:29:26.049576 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-10 09:29:26.049589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.049603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.049623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.049637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.049650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.050242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.050277 | orchestrator | 2025-02-10 09:29:26.050291 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-02-10 09:29:26.050303 | orchestrator | Monday 10 February 2025 09:21:37 +0000 (0:00:03.783) 0:00:36.274 ******* 2025-02-10 09:29:26.050316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.050329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.050352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.050365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.050591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.050610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.050622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.050634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.050654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.050666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.050678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.050741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 
'timeout': '30'}}})  2025-02-10 09:29:26.050757 | orchestrator | 2025-02-10 09:29:26.050769 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-02-10 09:29:26.050780 | orchestrator | Monday 10 February 2025 09:21:45 +0000 (0:00:08.025) 0:00:44.299 ******* 2025-02-10 09:29:26.050797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.050809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.050827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.050839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.050869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.050904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.050922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.050933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.050951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.050962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.050974 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.050985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.050997 | orchestrator | 2025-02-10 09:29:26.051008 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-02-10 09:29:26.051019 | orchestrator | Monday 10 February 2025 09:21:48 +0000 (0:00:03.182) 0:00:47.481 ******* 2025-02-10 09:29:26.051050 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-10 09:29:26.051064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-10 09:29:26.051075 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-10 09:29:26.051085 | orchestrator | 2025-02-10 09:29:26.051096 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-02-10 09:29:26.051106 | orchestrator | Monday 10 February 2025 09:21:52 +0000 (0:00:04.437) 0:00:51.919 ******* 2025-02-10 09:29:26.051117 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-10 09:29:26.051133 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.051143 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-10 09:29:26.051154 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.052310 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-10 09:29:26.052332 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.052343 | orchestrator | 2025-02-10 09:29:26.052354 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-02-10 09:29:26.052365 | orchestrator | Monday 10 February 2025 09:21:56 +0000 (0:00:03.714) 0:00:55.634 ******* 2025-02-10 09:29:26.052376 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.052386 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.052397 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.052407 | orchestrator | 2025-02-10 09:29:26.052418 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-02-10 
09:29:26.052428 | orchestrator | Monday 10 February 2025 09:21:59 +0000 (0:00:03.330) 0:00:58.964 ******* 2025-02-10 09:29:26.052439 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-10 09:29:26.052451 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-10 09:29:26.052461 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-10 09:29:26.052472 | orchestrator | 2025-02-10 09:29:26.052487 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-02-10 09:29:26.052498 | orchestrator | Monday 10 February 2025 09:22:05 +0000 (0:00:05.448) 0:01:04.413 ******* 2025-02-10 09:29:26.052509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-10 09:29:26.052520 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-10 09:29:26.052530 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-10 09:29:26.052541 | orchestrator | 2025-02-10 09:29:26.052552 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-02-10 09:29:26.053074 | orchestrator | Monday 10 February 2025 09:22:11 +0000 (0:00:06.257) 0:01:10.671 ******* 2025-02-10 09:29:26.053088 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-02-10 09:29:26.053368 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-02-10 09:29:26.053384 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-02-10 09:29:26.053395 | orchestrator | 2025-02-10 09:29:26.053406 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-02-10 09:29:26.053417 | orchestrator | Monday 10 February 2025 09:22:15 +0000 (0:00:04.292) 0:01:14.963 ******* 2025-02-10 09:29:26.053427 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-02-10 09:29:26.053438 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-02-10 09:29:26.053449 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-02-10 09:29:26.053458 | orchestrator | 2025-02-10 09:29:26.053466 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-02-10 09:29:26.053475 | orchestrator | Monday 10 February 2025 09:22:19 +0000 (0:00:03.605) 0:01:18.569 ******* 2025-02-10 09:29:26.053484 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.053493 | orchestrator | 2025-02-10 09:29:26.053502 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-02-10 09:29:26.053511 | orchestrator | Monday 10 February 2025 09:22:20 +0000 (0:00:01.481) 0:01:20.051 ******* 2025-02-10 09:29:26.053536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.053609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.053623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.053632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.053641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.053650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.053659 | orchestrator | 2025-02-10 09:29:26.053668 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-02-10 09:29:26.053677 | orchestrator | Monday 10 February 2025 
09:22:24 +0000 (0:00:03.302) 0:01:23.355 ******* 2025-02-10 09:29:26.053797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.054104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.054132 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.054142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.054152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.054160 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.054169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.054178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.054187 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.054196 | orchestrator | 2025-02-10 09:29:26.054205 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-02-10 09:29:26.054214 | orchestrator | Monday 10 February 2025 09:22:27 +0000 (0:00:03.617) 0:01:26.972 ******* 2025-02-10 09:29:26.054222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.054239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.054335 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.055166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.055200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.055211 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.055220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-10 09:29:26.055229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.055238 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.055247 | orchestrator | 2025-02-10 09:29:26.055256 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-02-10 09:29:26.055265 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:01.335) 0:01:28.308 ******* 2025-02-10 09:29:26.055273 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-10 09:29:26.055283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-10 09:29:26.055299 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-10 09:29:26.055308 | orchestrator | 2025-02-10 09:29:26.055316 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-02-10 09:29:26.055325 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:02.023) 0:01:30.332 ******* 2025-02-10 09:29:26.055335 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-10 09:29:26.055343 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.055352 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-10 09:29:26.055361 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.055370 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-10 09:29:26.055378 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.055387 | orchestrator | 2025-02-10 09:29:26.055395 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-02-10 09:29:26.055404 | orchestrator | Monday 10 February 2025 09:22:32 +0000 (0:00:01.645) 0:01:31.977 ******* 2025-02-10 09:29:26.055412 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:29:26.055421 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:29:26.055429 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:29:26.055438 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:29:26.055446 | 
orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.055455 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:29:26.055463 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.055472 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:29:26.055538 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.055550 | orchestrator | 2025-02-10 09:29:26.055559 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-02-10 09:29:26.055568 | orchestrator | Monday 10 February 2025 09:22:35 +0000 (0:00:03.082) 0:01:35.060 ******* 2025-02-10 09:29:26.055577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.055586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.055600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:29:26.055614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.055623 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.055676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:29:26.055688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.058243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.058317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.058376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.058403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:29:26.058429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966', '__omit_place_holder__b52b90a4da4619afd4fe3a448e9f2677d757d966'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:29:26.058461 | orchestrator | 2025-02-10 09:29:26.058487 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-02-10 09:29:26.058512 | orchestrator | Monday 10 February 2025 09:22:38 +0000 (0:00:03.058) 0:01:38.118 ******* 2025-02-10 09:29:26.058536 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.058557 | orchestrator | 2025-02-10 09:29:26.058581 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-02-10 09:29:26.058614 | orchestrator | Monday 10 February 2025 09:22:39 +0000 (0:00:01.001) 0:01:39.119 ******* 2025-02-10 09:29:26.058664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-10 09:29:26.058743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.058789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.058812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.058837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-10 09:29:26.058914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.058940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.058984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-10 09:29:26.059071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.059097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059149 | orchestrator | 2025-02-10 09:29:26.059173 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-02-10 09:29:26.059207 | orchestrator | Monday 10 February 2025 09:22:48 +0000 (0:00:08.537) 0:01:47.657 ******* 2025-02-10 09:29:26.059265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-10 09:29:26.059312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.059341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059387 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.059429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-10 09:29:26.059467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.059491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059556 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.059594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-10 09:29:26.059637 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.059663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.059710 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.059734 | orchestrator | 2025-02-10 09:29:26.059759 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-02-10 09:29:26.059813 | orchestrator | Monday 10 February 2025 09:22:50 +0000 (0:00:01.619) 0:01:49.277 ******* 2025-02-10 09:29:26.059840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:29:26.059970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:29:26.060000 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.060024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:29:26.060049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:29:26.060074 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.060093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:29:26.060108 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:29:26.060122 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.060136 | orchestrator | 2025-02-10 09:29:26.060150 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-02-10 09:29:26.060164 | orchestrator | Monday 10 February 2025 09:22:51 +0000 (0:00:01.674) 0:01:50.951 ******* 2025-02-10 09:29:26.060178 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.060192 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.060206 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.060220 | orchestrator | 2025-02-10 09:29:26.060243 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-02-10 09:29:26.060257 | orchestrator | Monday 10 February 2025 09:22:52 +0000 (0:00:00.472) 0:01:51.424 ******* 2025-02-10 09:29:26.060271 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.060285 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.060299 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.060313 | orchestrator | 2025-02-10 09:29:26.060327 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-02-10 09:29:26.060341 | orchestrator | Monday 10 February 2025 09:22:53 +0000 (0:00:01.435) 0:01:52.859 ******* 2025-02-10 09:29:26.060355 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.060369 | orchestrator | 2025-02-10 09:29:26.060382 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-02-10 09:29:26.060396 | orchestrator | Monday 10 February 2025 09:22:54 +0000 (0:00:00.813) 0:01:53.673 ******* 2025-02-10 09:29:26.060411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.060427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.060524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.060599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060629 | orchestrator | 2025-02-10 09:29:26.060643 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-02-10 09:29:26.060657 | orchestrator | Monday 10 February 2025 09:22:58 +0000 (0:00:04.396) 0:01:58.070 ******* 2025-02-10 09:29:26.060671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 
09:29:26.060686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.060740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060756 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.060770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060799 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.060813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.060836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.060930 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.060954 | orchestrator | 2025-02-10 09:29:26.060972 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-02-10 09:29:26.060985 | orchestrator | Monday 10 February 2025 09:23:00 +0000 (0:00:02.111) 0:02:00.183 ******* 2025-02-10 09:29:26.061000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 
09:29:26.061014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:29:26.061030 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.061044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:29:26.061065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:29:26.061080 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.061094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:29:26.061108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:29:26.061122 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.061136 | orchestrator | 2025-02-10 09:29:26.061150 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-02-10 09:29:26.061163 | orchestrator | Monday 10 February 2025 09:23:03 +0000 (0:00:02.089) 0:02:02.272 ******* 2025-02-10 09:29:26.061177 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.061191 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.061204 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.061218 | orchestrator | 2025-02-10 09:29:26.061232 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-02-10 09:29:26.061254 | orchestrator | Monday 10 February 2025 09:23:03 +0000 (0:00:00.510) 0:02:02.782 ******* 2025-02-10 09:29:26.061268 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.061287 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.061301 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.061315 | orchestrator | 2025-02-10 09:29:26.061329 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-02-10 09:29:26.061343 | orchestrator | Monday 10 February 2025 09:23:05 +0000 (0:00:01.557) 0:02:04.339 ******* 2025-02-10 09:29:26.061357 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.061370 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.061384 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.061397 | orchestrator | 2025-02-10 09:29:26.061411 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-02-10 09:29:26.061425 | orchestrator | Monday 10 February 2025 09:23:05 +0000 (0:00:00.347) 0:02:04.687 ******* 2025-02-10 09:29:26.061438 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.061452 | orchestrator | 2025-02-10 09:29:26.061466 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] 
******************* 2025-02-10 09:29:26.061480 | orchestrator | Monday 10 February 2025 09:23:06 +0000 (0:00:01.118) 0:02:05.805 ******* 2025-02-10 09:29:26.061494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-10 09:29:26.061522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-10 09:29:26.061539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-10 09:29:26.061553 | orchestrator | 2025-02-10 09:29:26.061567 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-02-10 09:29:26.061588 | orchestrator | Monday 10 February 2025 09:23:10 +0000 (0:00:03.837) 0:02:09.642 ******* 2025-02-10 09:29:26.061613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-10 09:29:26.061628 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.061643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-10 09:29:26.061657 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.061679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-10 09:29:26.061694 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.061708 | orchestrator | 2025-02-10 09:29:26.061722 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-02-10 09:29:26.061735 | orchestrator | Monday 10 February 2025 09:23:12 +0000 (0:00:02.479) 0:02:12.122 ******* 2025-02-10 09:29:26.061750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:29:26.061772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:29:26.061793 | orchestrator | skipping: 
[testbed-node-0] 2025-02-10 09:29:26.061808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:29:26.061823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:29:26.061837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:29:26.061871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:29:26.061886 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.061901 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.061914 | orchestrator | 2025-02-10 09:29:26.061928 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-02-10 09:29:26.061942 | orchestrator | Monday 10 February 2025 09:23:15 +0000 (0:00:02.993) 0:02:15.116 ******* 2025-02-10 09:29:26.061955 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.061969 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.061983 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.061996 | orchestrator | 2025-02-10 09:29:26.062011 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-02-10 09:29:26.062064 | orchestrator | Monday 10 February 2025 09:23:16 +0000 (0:00:00.581) 0:02:15.697 ******* 2025-02-10 09:29:26.062079 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.062094 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.062108 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.062121 | orchestrator | 2025-02-10 09:29:26.062135 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-02-10 09:29:26.062161 | orchestrator | Monday 10 February 2025 09:23:17 +0000 (0:00:01.511) 0:02:17.208 ******* 2025-02-10 09:29:26.062175 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.062189 | orchestrator | 2025-02-10 09:29:26.062209 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-02-10 
09:29:26.062223 | orchestrator | Monday 10 February 2025 09:23:19 +0000 (0:00:01.473) 0:02:18.681 ******* 2025-02-10 09:29:26.062256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.062299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.062378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.062401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062513 | orchestrator | 2025-02-10 09:29:26.062527 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-02-10 09:29:26.062541 | orchestrator | Monday 10 February 2025 09:23:26 +0000 (0:00:06.953) 0:02:25.635 ******* 2025-02-10 09:29:26.062555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.062579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062637 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.062652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.062667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062720 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.062741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.062762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.062815 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.062829 | orchestrator | 2025-02-10 09:29:26.062843 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-02-10 09:29:26.062883 | orchestrator | Monday 10 February 2025 09:23:28 +0000 (0:00:02.564) 0:02:28.199 ******* 2025-02-10 09:29:26.062898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:29:26.062922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:29:26.062948 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.062973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 
09:29:26.062998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:29:26.063035 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.063063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:29:26.063099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:29:26.063126 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.063153 | orchestrator | 2025-02-10 09:29:26.063179 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-02-10 09:29:26.063205 | orchestrator | Monday 10 February 2025 09:23:32 +0000 (0:00:03.775) 0:02:31.975 ******* 2025-02-10 09:29:26.063231 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.063258 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.063284 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.063310 | orchestrator | 2025-02-10 09:29:26.063336 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-02-10 09:29:26.063363 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:00.816) 0:02:32.791 ******* 2025-02-10 09:29:26.063390 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.063417 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.063443 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.063470 | orchestrator | 2025-02-10 09:29:26.063496 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-02-10 09:29:26.063522 | orchestrator | Monday 10 February 2025 09:23:35 +0000 (0:00:01.763) 0:02:34.555 ******* 2025-02-10 09:29:26.063549 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.063574 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.063600 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.063627 | orchestrator | 2025-02-10 09:29:26.063654 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-02-10 09:29:26.063680 | orchestrator | Monday 10 February 2025 09:23:35 +0000 (0:00:00.464) 0:02:35.019 ******* 2025-02-10 09:29:26.063706 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.063731 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.063758 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.063784 | orchestrator | 2025-02-10 09:29:26.063810 | orchestrator | TASK [include_role : designate] ************************************************ 2025-02-10 09:29:26.063836 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:00.561) 0:02:35.581 ******* 2025-02-10 09:29:26.063927 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.063954 | orchestrator | 2025-02-10 09:29:26.063978 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-02-10 09:29:26.064002 | orchestrator | Monday 10 February 2025 
09:23:37 +0000 (0:00:01.221) 0:02:36.802 ******* 2025-02-10 09:29:26.064027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:29:26.064064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:29:26.064090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:29:26.064304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:29:26.064341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-02-10 09:29:26.064516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:29:26.064538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064672 | orchestrator | 2025-02-10 09:29:26.064692 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-02-10 09:29:26.064715 | orchestrator | Monday 10 February 2025 09:23:45 +0000 (0:00:07.918) 0:02:44.721 ******* 2025-02-10 09:29:26.064744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:29:26.064766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:29:26.064787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.064951 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.064986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:29:26.065009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:29:26.065031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065126 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.065139 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:29:26.065167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:29:26.065181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 
09:29:26.065227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.065253 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.065267 | orchestrator | 2025-02-10 09:29:26.065289 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-02-10 09:29:26.065319 | orchestrator | Monday 10 February 2025 09:23:48 +0000 (0:00:02.741) 0:02:47.462 ******* 2025-02-10 09:29:26.065341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:29:26.065364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:29:26.065386 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.065400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:29:26.065413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:29:26.065425 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.065437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:29:26.065450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:29:26.065462 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.065474 | orchestrator | 2025-02-10 09:29:26.065486 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-02-10 09:29:26.065498 | orchestrator | Monday 10 February 2025 09:23:52 +0000 (0:00:04.111) 0:02:51.574 ******* 2025-02-10 
09:29:26.065510 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.065523 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.065535 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.065547 | orchestrator | 2025-02-10 09:29:26.065559 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-02-10 09:29:26.065571 | orchestrator | Monday 10 February 2025 09:23:53 +0000 (0:00:00.825) 0:02:52.399 ******* 2025-02-10 09:29:26.065584 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.065596 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.065608 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.065620 | orchestrator | 2025-02-10 09:29:26.065633 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-02-10 09:29:26.065645 | orchestrator | Monday 10 February 2025 09:23:55 +0000 (0:00:01.962) 0:02:54.362 ******* 2025-02-10 09:29:26.065657 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.065669 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.065681 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.065693 | orchestrator | 2025-02-10 09:29:26.065706 | orchestrator | TASK [include_role : glance] *************************************************** 2025-02-10 09:29:26.065718 | orchestrator | Monday 10 February 2025 09:23:55 +0000 (0:00:00.643) 0:02:55.005 ******* 2025-02-10 09:29:26.065731 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.065744 | orchestrator | 2025-02-10 09:29:26.065756 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-02-10 09:29:26.065768 | orchestrator | Monday 10 February 2025 09:23:57 +0000 (0:00:01.828) 0:02:56.833 ******* 2025-02-10 09:29:26.065805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:29:26.065840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.065896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:29:26.065943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.065964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:29:26.066262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.066308 | orchestrator | 2025-02-10 09:29:26.066322 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-02-10 09:29:26.066335 | orchestrator | Monday 10 February 2025 09:24:11 +0000 (0:00:14.214) 0:03:11.048 ******* 2025-02-10 09:29:26.066451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:29:26.066484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.066507 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.066595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:29:26.066623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.066648 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.066770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:29:26.066806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.066842 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.066893 | orchestrator | 2025-02-10 09:29:26.066915 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-02-10 09:29:26.066937 | orchestrator | 
Monday 10 February 2025 09:24:19 +0000 (0:00:07.461) 0:03:18.509 ******* 2025-02-10 09:29:26.066960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:29:26.066982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:29:26.067003 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.067136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:29:26.067160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:29:26.067184 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.067198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:29:26.067211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:29:26.067224 
| orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.067237 | orchestrator | 2025-02-10 09:29:26.067249 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-02-10 09:29:26.067262 | orchestrator | Monday 10 February 2025 09:24:25 +0000 (0:00:06.345) 0:03:24.855 ******* 2025-02-10 09:29:26.067274 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.067286 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.067299 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.067311 | orchestrator | 2025-02-10 09:29:26.067324 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-02-10 09:29:26.067336 | orchestrator | Monday 10 February 2025 09:24:25 +0000 (0:00:00.369) 0:03:25.225 ******* 2025-02-10 09:29:26.067349 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.067361 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.067373 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.067386 | orchestrator | 2025-02-10 09:29:26.067405 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-02-10 09:29:26.067425 | orchestrator | Monday 10 February 2025 09:24:27 +0000 (0:00:01.520) 0:03:26.745 ******* 2025-02-10 09:29:26.067446 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.067467 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.067487 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.067507 | orchestrator | 2025-02-10 09:29:26.067529 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-02-10 09:29:26.067551 | orchestrator | Monday 10 February 2025 09:24:28 +0000 (0:00:00.550) 0:03:27.295 ******* 2025-02-10 09:29:26.067572 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.067587 | orchestrator | 2025-02-10 09:29:26.067599 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-02-10 09:29:26.067611 | orchestrator | Monday 10 February 2025 09:24:29 +0000 (0:00:01.287) 0:03:28.583 ******* 2025-02-10 09:29:26.067624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:29:26.067737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:29:26.067775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:29:26.067789 | orchestrator | 2025-02-10 09:29:26.067801 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-02-10 09:29:26.067814 | orchestrator | Monday 10 February 2025 09:24:33 +0000 (0:00:04.407) 0:03:32.991 ******* 2025-02-10 09:29:26.067826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:29:26.067839 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.067914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:29:26.067931 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.067944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:29:26.067956 | orchestrator | skipping: [testbed-node-2] 
2025-02-10 09:29:26.067982 | orchestrator | 2025-02-10 09:29:26.067995 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-02-10 09:29:26.068007 | orchestrator | Monday 10 February 2025 09:24:34 +0000 (0:00:00.631) 0:03:33.623 ******* 2025-02-10 09:29:26.068020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:29:26.068122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:29:26.068138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:29:26.068149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:29:26.068159 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.068170 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.068180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:29:26.068190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:29:26.068200 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.068211 | orchestrator | 2025-02-10 09:29:26.068221 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-02-10 09:29:26.068232 | orchestrator | Monday 10 February 2025 09:24:35 +0000 (0:00:01.111) 0:03:34.734 ******* 2025-02-10 09:29:26.068242 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.068252 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.068262 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.068272 | orchestrator | 2025-02-10 09:29:26.068282 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-02-10 09:29:26.068292 | orchestrator | Monday 10 February 2025 09:24:35 +0000 (0:00:00.384) 0:03:35.119 ******* 2025-02-10 09:29:26.068302 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.068312 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.068322 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.068332 | orchestrator | 2025-02-10 09:29:26.068342 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-02-10 09:29:26.068352 | orchestrator | Monday 10 February 2025 09:24:37 +0000 (0:00:01.429) 0:03:36.548 ******* 2025-02-10 09:29:26.068362 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.068372 | orchestrator | 2025-02-10 09:29:26.068383 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-02-10 09:29:26.068393 | orchestrator | Monday 10 February 2025 09:24:38 
+0000 (0:00:01.242) 0:03:37.791 ******* 2025-02-10 09:29:26.068404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.068424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.068491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.068511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 
'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.068523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.068546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.068564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.068632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.068648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.068659 | orchestrator | 2025-02-10 09:29:26.068670 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-02-10 09:29:26.068680 | orchestrator | Monday 10 February 2025 09:24:46 +0000 (0:00:07.882) 0:03:45.674 ******* 2025-02-10 09:29:26.068699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.068713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.068731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.068742 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.068804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.068824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.068835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.068845 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.068894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.068922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.069024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.069053 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.069076 | orchestrator | 2025-02-10 09:29:26.069095 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-02-10 09:29:26.069114 | orchestrator | Monday 10 February 2025 09:24:47 +0000 (0:00:01.184) 0:03:46.858 ******* 2025-02-10 09:29:26.069136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069212 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.069230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069318 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.069329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:29:26.069371 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.069382 | orchestrator | 2025-02-10 09:29:26.069396 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-02-10 09:29:26.069407 | orchestrator | Monday 10 February 2025 09:24:49 +0000 (0:00:01.944) 0:03:48.803 ******* 2025-02-10 09:29:26.069417 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.069426 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.069436 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.069446 | orchestrator | 2025-02-10 09:29:26.069456 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-02-10 09:29:26.069466 | orchestrator | Monday 10 February 2025 09:24:50 +0000 (0:00:00.608) 0:03:49.411 ******* 2025-02-10 09:29:26.069476 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.069486 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.069496 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.069506 | orchestrator | 2025-02-10 09:29:26.069516 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-02-10 09:29:26.069526 | orchestrator | Monday 10 February 2025 09:24:51 +0000 (0:00:01.742) 0:03:51.154 ******* 2025-02-10 09:29:26.069536 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.069546 | orchestrator | 2025-02-10 09:29:26.069562 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-02-10 09:29:26.069576 | orchestrator | Monday 10 February 2025 09:24:53 +0000 (0:00:01.247) 0:03:52.401 ******* 2025-02-10 09:29:26.069725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:29:26.069832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:29:26.069877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:29:26.069928 | orchestrator | 2025-02-10 09:29:26.069949 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-02-10 09:29:26.069965 | orchestrator | Monday 10 February 2025 09:24:59 +0000 (0:00:06.316) 0:03:58.717 ******* 2025-02-10 09:29:26.070129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 
'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:29:26.070172 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.070190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:29:26.070224 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.070345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:29:26.070386 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.070397 | orchestrator | 2025-02-10 09:29:26.070407 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-02-10 09:29:26.070417 | orchestrator | Monday 10 February 2025 09:25:01 +0000 (0:00:02.035) 0:04:00.753 ******* 2025-02-10 09:29:26.070428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:29:26.070441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:29:26.070452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:29:26.070464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:29:26.070475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-10 09:29:26.070487 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.070502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:29:26.070513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:29:26.070524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:29:26.070598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:29:26.070619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-10 09:29:26.070637 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.070647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:29:26.070657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:29:26.070669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:29:26.070686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:29:26.070703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-10 09:29:26.070720 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.070740 | orchestrator | 2025-02-10 09:29:26.070759 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-02-10 09:29:26.070771 | orchestrator | Monday 10 February 2025 09:25:03 +0000 (0:00:01.746) 0:04:02.500 ******* 2025-02-10 09:29:26.070781 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.070790 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.070800 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.070810 | orchestrator | 2025-02-10 09:29:26.070820 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-02-10 09:29:26.070830 | orchestrator | Monday 10 February 2025 09:25:03 +0000 (0:00:00.562) 0:04:03.063 ******* 2025-02-10 09:29:26.070841 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.070965 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.070998 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.071008 | orchestrator | 2025-02-10 09:29:26.071019 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-02-10 09:29:26.071029 | orchestrator | Monday 10 February 2025 09:25:05 +0000 (0:00:01.553) 0:04:04.616 ******* 2025-02-10 09:29:26.071039 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.071057 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.071067 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.071078 | orchestrator | 2025-02-10 09:29:26.071088 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-02-10 09:29:26.071097 | orchestrator | Monday 10 February 2025 09:25:05 +0000 (0:00:00.545) 0:04:05.162 ******* 2025-02-10 09:29:26.071108 | orchestrator | included: ironic for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.071118 | orchestrator | 2025-02-10 09:29:26.071128 | orchestrator | TASK [haproxy-config : Copying over ironic haproxy config] ********************* 2025-02-10 09:29:26.071139 | orchestrator | Monday 10 February 2025 09:25:07 +0000 (0:00:01.100) 0:04:06.262 ******* 2025-02-10 09:29:26.071151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': 
['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.071263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.071284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.071294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.071304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.071313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.071380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:29:26.071399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:29:26.071492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:29:26.071504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.071515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:29:26.071530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:29:26.071591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:29:26.071611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:29:26.071620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 
'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.071630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:29:26.071640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:29:26.071656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:29:26.071713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:29:26.071731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.071741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:29:26.071751 | orchestrator | 2025-02-10 09:29:26.071760 | orchestrator | TASK [haproxy-config : Add configuration for ironic when using single external frontend] *** 2025-02-10 09:29:26.071770 | orchestrator | Monday 10 February 2025 09:25:15 +0000 (0:00:08.507) 0:04:14.770 ******* 2025-02-10 09:29:26.071780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.071790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.071878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:29:26.071897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:29:26.071907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:29:26.071916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.071925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:29:26.071936 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.071946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.071962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.072029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:29:26.072048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:29:26.072058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:29:26.072068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.072087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:29:26.072096 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.072106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.072163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.072180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 
'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:29:26.072190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:29:26.072205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:29:26.072214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.072223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:29:26.072232 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.072240 | orchestrator | 2025-02-10 09:29:26.072249 | orchestrator | TASK [haproxy-config : Configuring firewall for ironic] ************************ 2025-02-10 09:29:26.072301 | orchestrator | Monday 10 February 2025 09:25:16 +0000 (0:00:01.261) 0:04:16.031 ******* 2025-02-10 09:29:26.072318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:29:26.072328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:29:26.072338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:29:26.072348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:29:26.072357 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.072366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:29:26.072374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:29:26.072386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:29:26.072395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:29:26.072410 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.072419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:29:26.072428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:29:26.072437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:29:26.072445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:29:26.072454 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.072462 | orchestrator | 2025-02-10 09:29:26.072471 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL users config] ************* 2025-02-10 09:29:26.072479 | orchestrator | Monday 10 February 2025 09:25:18 +0000 (0:00:01.644) 0:04:17.675 ******* 2025-02-10 09:29:26.072488 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.072497 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.072505 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.072513 | orchestrator | 2025-02-10 09:29:26.072522 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL rules config] ************* 2025-02-10 09:29:26.072539 | orchestrator | Monday 10 February 2025 09:25:18 +0000 (0:00:00.561) 0:04:18.237 ******* 2025-02-10 09:29:26.072548 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.072556 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.072565 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.072573 | 
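The loop items above show the data shape the kolla-ansible haproxy-config role works from: each service entry may carry a 'haproxy' map whose keys (enabled, mode, external, external_fqdn, port, listen_port, tls_backend) decide which HAProxy frontends and backends get written for that service. As a minimal sketch of that idea only — assuming 192.168.16.9 (seen in the no_proxy values later in this run) is the internal VIP and the three node addresses from the healthcheck_curl tests are the backends; this is not the actual Kolla role or its template — one such entry could be turned into an HAProxy-style listen section like this:

# Hypothetical illustration of the 'haproxy' entry shape logged above.
# Not the kolla-ansible haproxy-config role; just a sketch of how one
# entry (enabled/mode/external/port/listen_port) maps to a listen block.
def render_listen(name, svc, vip, backends):
    """Return an HAProxy-style listen section for one service entry."""
    if svc.get("enabled") not in (True, "yes"):
        return ""  # disabled services produce no config
    # external entries bind on the external FQDN, internal ones on the VIP
    bind_host = svc["external_fqdn"] if svc.get("external") else vip
    lines = [
        f"listen {name}",
        f"    mode {svc.get('mode', 'http')}",
        f"    bind {bind_host}:{svc['listen_port']}",
    ]
    for i, host in enumerate(backends):
        lines.append(f"    server node-{i} {host}:{svc['port']} check")
    return "\n".join(lines)

if __name__ == "__main__":
    # Values copied from the ironic_inspector item above; the VIP and the
    # backend addresses are assumptions taken from elsewhere in this log.
    entry = {"enabled": "yes", "mode": "http", "external": False,
             "port": "5050", "listen_port": "5050"}
    print(render_listen("ironic_inspector", entry,
                        vip="192.168.16.9",
                        backends=["192.168.16.10",
                                  "192.168.16.11",
                                  "192.168.16.12"]))

The actual role templates these values into per-service HAProxy configuration on each controller and additionally handles the external frontend and TLS options (tls_backend, external_fqdn) visible in the items; the sketch only mirrors the internal case.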
orchestrator | 2025-02-10 09:29:26.072582 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-02-10 09:29:26.072594 | orchestrator | Monday 10 February 2025 09:25:20 +0000 (0:00:01.510) 0:04:19.748 ******* 2025-02-10 09:29:26.072604 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.072612 | orchestrator | 2025-02-10 09:29:26.072621 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-02-10 09:29:26.072630 | orchestrator | Monday 10 February 2025 09:25:22 +0000 (0:00:01.548) 0:04:21.297 ******* 2025-02-10 09:29:26.072689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:29:26.072706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:29:26.072726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:29:26.072735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:29:26.072745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:29:26.072798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:29:26.072815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:29:26.072831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:29:26.072840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:29:26.072863 | orchestrator | 2025-02-10 09:29:26.072873 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-02-10 09:29:26.072882 | orchestrator | Monday 10 February 2025 09:25:27 +0000 (0:00:05.192) 0:04:26.489 ******* 2025-02-10 09:29:26.072891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:29:26.072953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:29:26.072972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:29:26.072991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:29:26.073001 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.073010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:29:26.073021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:29:26.073031 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.073090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:29:26.073109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:29:26.073128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:29:26.073138 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.073147 | orchestrator | 2025-02-10 09:29:26.073157 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-02-10 09:29:26.073166 | orchestrator | Monday 10 February 2025 09:25:28 +0000 (0:00:01.038) 0:04:27.528 ******* 2025-02-10 09:29:26.073180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-02-10 09:29:26.073193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-02-10 09:29:26.073204 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.073214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-02-10 09:29:26.073223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-02-10 09:29:26.073232 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.073241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-02-10 09:29:26.073251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-02-10 09:29:26.073260 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.073269 | orchestrator | 2025-02-10 09:29:26.073278 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-02-10 09:29:26.073287 | orchestrator | Monday 10 February 2025 09:25:29 +0000 (0:00:01.631) 0:04:29.160 ******* 2025-02-10 09:29:26.073295 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.073304 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.073313 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.073322 | orchestrator | 2025-02-10 09:29:26.073331 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-02-10 09:29:26.073346 | orchestrator | Monday 10 February 2025 09:25:30 +0000 (0:00:00.373) 0:04:29.533 ******* 2025-02-10 09:29:26.073355 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.073363 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.073373 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.073382 | orchestrator | 2025-02-10 09:29:26.073391 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-02-10 09:29:26.073400 | orchestrator | Monday 10 February 2025 09:25:31 +0000 (0:00:01.516) 0:04:31.049 ******* 2025-02-10 09:29:26.073462 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.073480 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.073490 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.073499 | orchestrator | 2025-02-10 09:29:26.073508 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-02-10 09:29:26.073518 | orchestrator | Monday 10 February 2025 09:25:32 +0000 (0:00:00.530) 0:04:31.580 ******* 2025-02-10 09:29:26.073527 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.073536 | orchestrator | 2025-02-10 09:29:26.073545 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-02-10 09:29:26.073554 | orchestrator | Monday 10 February 2025 09:25:33 +0000 (0:00:01.543) 0:04:33.123 ******* 2025-02-10 09:29:26.073564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:29:26.073575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.073586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:29:26.073603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.073663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:29:26.073681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.073690 | orchestrator | 2025-02-10 09:29:26.073699 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-02-10 09:29:26.073709 | orchestrator | Monday 10 February 2025 09:25:38 +0000 (0:00:04.270) 0:04:37.394 ******* 2025-02-10 09:29:26.073718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:29:26.073727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.073744 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.073801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:29:26.073819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.073829 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.073839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:29:26.073885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.073896 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.073912 | orchestrator | 2025-02-10 09:29:26.073921 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-02-10 09:29:26.073930 | orchestrator | Monday 10 February 2025 09:25:39 +0000 (0:00:00.918) 0:04:38.313 ******* 2025-02-10 09:29:26.073939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:29:26.073948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-10 
09:29:26.073957 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.073966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:29:26.073975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:29:26.073984 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.073993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:29:26.074074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:29:26.074094 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.074104 | orchestrator | 2025-02-10 09:29:26.074113 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-02-10 09:29:26.074122 | orchestrator | Monday 10 February 2025 09:25:40 +0000 (0:00:01.315) 0:04:39.628 ******* 2025-02-10 09:29:26.074131 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.074140 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.074149 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.074158 | orchestrator | 2025-02-10 09:29:26.074167 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-02-10 09:29:26.074175 | orchestrator | Monday 10 February 2025 09:25:40 +0000 (0:00:00.344) 0:04:39.973 ******* 2025-02-10 09:29:26.074185 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.074194 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.074203 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.074212 | orchestrator | 2025-02-10 09:29:26.074221 | orchestrator | TASK [include_role : manila] *************************************************** 2025-02-10 09:29:26.074230 | orchestrator | Monday 10 February 2025 09:25:42 +0000 (0:00:01.526) 0:04:41.499 ******* 2025-02-10 09:29:26.074240 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.074249 | orchestrator | 2025-02-10 09:29:26.074257 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-02-10 09:29:26.074266 | orchestrator | Monday 10 February 2025 09:25:43 +0000 (0:00:01.607) 0:04:43.106 ******* 2025-02-10 09:29:26.074276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-10 09:29:26.074293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-10 09:29:26.074389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-10 09:29:26.074441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074526 | orchestrator | 2025-02-10 09:29:26.074536 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-02-10 09:29:26.074550 | orchestrator | Monday 10 February 2025 09:25:48 +0000 (0:00:04.881) 0:04:47.988 ******* 2025-02-10 09:29:26.074566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-10 09:29:26.074590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074675 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.074689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-10 09:29:26.074708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074744 | 
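For reference, the nested 'haproxy' mapping attached to the manila-api items above is what the haproxy-config role expands into an internal and an external frontend. The snippet below is only a minimal Python sketch of that expansion, not the role's actual Jinja2 template: the VIP addresses are hypothetical placeholders (the real ones come from the deployment's internal/external VIP settings), while the backend IPs and ports are the ones visible in the healthcheck URLs logged above.

manila_api_frontends = {
    'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False,
                   'port': '8786', 'listen_port': '8786'},
    'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
                            'external_fqdn': 'api.testbed.osism.xyz',
                            'port': '8786', 'listen_port': '8786'},
}

# Hypothetical VIPs for illustration only (TEST-NET addresses).
vip = {False: '192.0.2.10', True: '192.0.2.20'}
# Backend addresses taken from the healthcheck_curl URLs in the log.
backend_ips = ['192.168.16.10', '192.168.16.11', '192.168.16.12']

def render_listen(name, fe):
    # Rough approximation of one HAProxy "listen" section for a frontend entry.
    lines = [f'listen {name}',
             f'    mode {fe["mode"]}',
             f'    bind {vip[fe["external"]]}:{fe["listen_port"]}']
    for i, ip in enumerate(backend_ips):
        lines.append(f'    server testbed-node-{i} {ip}:{fe["port"]} check')
    return '\n'.join(lines)

for name, fe in manila_api_frontends.items():
    if fe['enabled'] == 'yes':
        print(render_listen(name, fe))
        print()
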
orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.074753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-10 09:29:26.074811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.074911 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.074920 | orchestrator | 2025-02-10 09:29:26.074929 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-02-10 09:29:26.074938 | orchestrator | Monday 10 February 2025 09:25:49 +0000 (0:00:00.913) 0:04:48.902 ******* 2025-02-10 09:29:26.074948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:29:26.074957 | 
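Each service item in these loops also carries a Docker-style healthcheck block ('interval', 'retries', 'start_period', 'test', 'timeout'). As a quick illustration of what such a block corresponds to, the sketch below maps one of the manila-api healthchecks from the log onto docker run health flags; it assumes the bare numbers are seconds, so treat it as an approximation rather than the exact invocation kolla makes.

import shlex

healthcheck = {
    'interval': '30', 'retries': '3', 'start_period': '5',
    'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'],
    'timeout': '30',
}

def to_docker_run_flags(hc):
    # 'CMD-SHELL' means the command is run through a shell inside the container;
    # plain numbers are treated as seconds here (assumption).
    return [
        '--health-cmd', hc['test'][1],
        '--health-interval', f"{hc['interval']}s",
        '--health-retries', hc['retries'],
        '--health-start-period', f"{hc['start_period']}s",
        '--health-timeout', f"{hc['timeout']}s",
    ]

print('docker run ' + ' '.join(shlex.quote(f) for f in to_docker_run_flags(healthcheck)) + ' ...')
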
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:29:26.074966 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.074975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:29:26.074984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:29:26.074993 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.075002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:29:26.075012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:29:26.075021 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.075029 | orchestrator | 2025-02-10 09:29:26.075038 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-02-10 09:29:26.075047 | orchestrator | Monday 10 February 2025 09:25:51 +0000 (0:00:01.410) 0:04:50.313 ******* 2025-02-10 09:29:26.075056 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.075064 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.075073 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.075082 | orchestrator | 2025-02-10 09:29:26.075091 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-02-10 09:29:26.075099 | orchestrator | Monday 10 February 2025 09:25:51 +0000 (0:00:00.598) 0:04:50.911 ******* 2025-02-10 09:29:26.075108 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.075121 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.075130 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.075139 | orchestrator | 2025-02-10 09:29:26.075147 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-02-10 09:29:26.075156 | orchestrator | Monday 10 February 2025 09:25:53 +0000 (0:00:01.616) 0:04:52.527 ******* 2025-02-10 09:29:26.075164 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.075173 | orchestrator | 2025-02-10 09:29:26.075182 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-02-10 09:29:26.075191 | orchestrator | Monday 10 February 2025 09:25:54 +0000 (0:00:01.467) 0:04:53.995 ******* 2025-02-10 09:29:26.075200 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:29:26.075216 | orchestrator | 2025-02-10 09:29:26.075280 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-02-10 09:29:26.075293 | orchestrator | Monday 10 February 2025 09:25:58 +0000 (0:00:04.229) 0:04:58.225 ******* 2025-02-10 09:29:26.075303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:29:26.075328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:29:26.075381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:29:26.075414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:29:26.075425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:29:26.075435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:29:26.075444 | orchestrator | 2025-02-10 09:29:26.075453 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-02-10 09:29:26.075463 | orchestrator | Monday 10 February 2025 09:26:03 +0000 (0:00:04.716) 0:05:02.942 ******* 2025-02-10 09:29:26.075518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-10 09:29:26.075550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:29:26.075560 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.075570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-10 09:29:26.075638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:29:26.075655 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.075665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-10 09:29:26.075684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:29:26.075694 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.075704 | orchestrator | 2025-02-10 09:29:26.075713 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-02-10 09:29:26.075722 | orchestrator | Monday 10 February 2025 09:26:07 +0000 (0:00:03.592) 0:05:06.534 ******* 2025-02-10 09:29:26.075731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:29:26.075746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:29:26.075803 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.075816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:29:26.075831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:29:26.075841 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.075866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:29:26.075876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:29:26.075885 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.075893 | orchestrator | 2025-02-10 09:29:26.075902 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-02-10 09:29:26.075911 | orchestrator | Monday 10 February 2025 09:26:10 +0000 (0:00:03.709) 0:05:10.244 ******* 2025-02-10 09:29:26.075920 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.075929 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.075938 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.075947 | orchestrator | 2025-02-10 09:29:26.075955 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-02-10 09:29:26.075964 | orchestrator | Monday 10 February 2025 09:26:11 +0000 (0:00:00.363) 0:05:10.607 ******* 2025-02-10 09:29:26.075979 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.075987 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.075996 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.076005 | orchestrator | 2025-02-10 09:29:26.076013 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-02-10 09:29:26.076022 | orchestrator | Monday 10 February 2025 09:26:12 +0000 (0:00:01.644) 0:05:12.252 ******* 2025-02-10 09:29:26.076030 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.076039 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.076047 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.076056 | orchestrator | 2025-02-10 09:29:26.076064 | orchestrator | TASK [include_role : memcached] 
************************************************ 2025-02-10 09:29:26.076073 | orchestrator | Monday 10 February 2025 09:26:13 +0000 (0:00:00.583) 0:05:12.836 ******* 2025-02-10 09:29:26.076082 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.076090 | orchestrator | 2025-02-10 09:29:26.076099 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-02-10 09:29:26.076108 | orchestrator | Monday 10 February 2025 09:26:15 +0000 (0:00:01.685) 0:05:14.521 ******* 2025-02-10 09:29:26.076171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-10 09:29:26.076195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-10 09:29:26.076205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-10 09:29:26.076219 | orchestrator | 2025-02-10 09:29:26.076227 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-02-10 09:29:26.076236 | orchestrator | Monday 10 February 2025 09:26:16 +0000 (0:00:01.625) 0:05:16.147 ******* 2025-02-10 09:29:26.076245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 
'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-10 09:29:26.076260 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.076269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-10 09:29:26.076279 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.076341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-10 09:29:26.076355 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.076368 | orchestrator | 2025-02-10 09:29:26.076377 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-02-10 09:29:26.076386 | orchestrator | Monday 10 February 2025 09:26:17 +0000 (0:00:00.726) 0:05:16.873 ******* 2025-02-10 09:29:26.076394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-10 09:29:26.076404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-10 09:29:26.076413 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.076421 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.076430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-10 09:29:26.076439 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.076448 | orchestrator | 2025-02-10 09:29:26.076457 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-02-10 09:29:26.076466 | orchestrator | Monday 10 February 2025 09:26:18 +0000 (0:00:01.205) 0:05:18.079 ******* 2025-02-10 09:29:26.076480 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.076489 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.076498 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.076507 | orchestrator | 2025-02-10 09:29:26.076516 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-02-10 09:29:26.076525 | orchestrator | Monday 10 February 2025 09:26:19 +0000 (0:00:00.369) 0:05:18.448 ******* 2025-02-10 09:29:26.076534 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.076542 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.076551 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.076560 | orchestrator | 2025-02-10 09:29:26.076568 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-02-10 09:29:26.076577 | orchestrator | Monday 10 February 2025 09:26:20 +0000 (0:00:01.622) 0:05:20.071 ******* 2025-02-10 09:29:26.076585 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.076594 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.076602 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.076611 | orchestrator | 2025-02-10 09:29:26.076619 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-02-10 09:29:26.076628 | orchestrator | Monday 10 February 2025 09:26:21 +0000 (0:00:00.648) 0:05:20.719 ******* 2025-02-10 09:29:26.076637 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.076645 | orchestrator | 2025-02-10 09:29:26.076653 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-02-10 09:29:26.076662 | orchestrator | Monday 10 February 2025 09:26:23 +0000 (0:00:01.881) 0:05:22.600 ******* 2025-02-10 09:29:26.076671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:29:26.076728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.076741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.076758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.076780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:29:26.076789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.076799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.076897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.076915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.076932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.076941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 
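Most items in these haproxy-config loops are reported as skipping, which is consistent with the data itself: agents that are disabled in this run (for example the Open vSwitch and Linux bridge agents) and services without a 'haproxy' mapping (schedulers, metadata agents) produce no load-balancer configuration. The fragment below only sketches that filtering idea on a reduced copy of the neutron items from the log; the role's actual conditions are more involved.

def services_needing_haproxy(services):
    # Keep only services that are enabled and define a 'haproxy' mapping.
    for name, svc in services.items():
        if svc.get('enabled') in (True, 'yes') and svc.get('haproxy'):
            yield name, list(svc['haproxy'])

neutron_services = {
    'neutron-server': {
        'enabled': True,
        'haproxy': {
            'neutron_server': {'enabled': True, 'mode': 'http', 'external': False,
                               'port': '9696', 'listen_port': '9696'},
            'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True,
                                        'external_fqdn': 'api.testbed.osism.xyz',
                                        'port': '9696', 'listen_port': '9696'},
        },
    },
    'neutron-openvswitch-agent': {'enabled': False},   # disabled in this run, so skipped
    'neutron-ovn-metadata-agent': {'enabled': True},   # enabled, but no 'haproxy' key, so skipped
}

for name, frontends in services_needing_haproxy(neutron_services):
    print(name, '->', ', '.join(frontends))
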
09:29:26.076960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.076970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.077030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.077071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:29:26.077081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.077090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:29:26.077197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.077226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.077284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.077317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.077344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.077353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:29:26.077440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.077452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.077462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:29:26.077575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.077604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.077659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.077692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.077721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.077730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.077817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.077827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077837 | orchestrator | 2025-02-10 09:29:26.077846 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-02-10 09:29:26.077879 | orchestrator | Monday 10 February 2025 09:26:29 +0000 (0:00:05.968) 0:05:28.569 ******* 2025-02-10 09:29:26.077888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:29:26.077898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.077977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:29:26.078012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.078053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.078070 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.078146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.078176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.078185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.078276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.078285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078300 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.078309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:29:26.078318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:29:26.078419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.078447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.078462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.078540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:29:26.078550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.078595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.078668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:29:26.078718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.078774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.078810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.078830 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.078839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:29:26.078902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.078973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.078992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.079001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.079010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-02-10 09:29:26.079019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.079034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:29:26.079050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:29:26.079104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.079118 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.079132 | orchestrator | 2025-02-10 09:29:26.079141 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-02-10 09:29:26.079150 | orchestrator | Monday 10 February 2025 09:26:31 +0000 (0:00:02.395) 0:05:30.965 ******* 2025-02-10 09:29:26.079159 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:29:26.079168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:29:26.079177 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.079186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:29:26.079195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:29:26.079204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:29:26.079220 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.079229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:29:26.079237 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.079246 | orchestrator | 2025-02-10 09:29:26.079254 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-02-10 09:29:26.079263 | orchestrator | Monday 10 February 2025 09:26:34 +0000 (0:00:02.377) 0:05:33.342 ******* 2025-02-10 09:29:26.079272 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.079281 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.079289 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.079298 | orchestrator | 2025-02-10 09:29:26.079308 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-02-10 09:29:26.079316 | orchestrator | Monday 10 February 2025 09:26:34 +0000 (0:00:00.596) 0:05:33.939 ******* 2025-02-10 09:29:26.079324 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.079333 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.079342 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.079350 | orchestrator | 2025-02-10 09:29:26.079359 | orchestrator | TASK [include_role : placement] ************************************************ 2025-02-10 09:29:26.079367 | orchestrator | Monday 10 February 2025 09:26:36 +0000 (0:00:01.685) 0:05:35.625 ******* 2025-02-10 09:29:26.079376 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.079384 | orchestrator | 2025-02-10 09:29:26.079393 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-02-10 09:29:26.079402 | orchestrator | Monday 10 February 2025 09:26:37 +0000 (0:00:01.477) 0:05:37.102 ******* 2025-02-10 09:29:26.079434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.079445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.079463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.079477 | orchestrator | 2025-02-10 09:29:26.079486 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-02-10 09:29:26.079494 | orchestrator | Monday 10 February 2025 09:26:42 +0000 (0:00:04.836) 0:05:41.938 ******* 2025-02-10 09:29:26.079503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.079511 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.079519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.079528 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.079555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.079569 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.079578 | orchestrator | 2025-02-10 09:29:26.079586 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-02-10 09:29:26.079594 | orchestrator | Monday 10 February 2025 09:26:43 +0000 (0:00:00.850) 0:05:42.789 ******* 2025-02-10 09:29:26.079601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:29:26.079614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:29:26.079622 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.079630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 
09:29:26.079639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:29:26.079647 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.079655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:29:26.079663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:29:26.079672 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.079680 | orchestrator | 2025-02-10 09:29:26.079688 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-02-10 09:29:26.079696 | orchestrator | Monday 10 February 2025 09:26:44 +0000 (0:00:01.316) 0:05:44.106 ******* 2025-02-10 09:29:26.079705 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.079716 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.079724 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.079732 | orchestrator | 2025-02-10 09:29:26.079740 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-02-10 09:29:26.079748 | orchestrator | Monday 10 February 2025 09:26:45 +0000 (0:00:00.623) 0:05:44.729 ******* 2025-02-10 09:29:26.079756 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.079764 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.079772 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.079780 | orchestrator | 2025-02-10 09:29:26.079788 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-02-10 09:29:26.079799 | orchestrator | Monday 10 February 2025 09:26:47 +0000 (0:00:01.696) 0:05:46.425 ******* 2025-02-10 09:29:26.079807 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.079815 | orchestrator | 2025-02-10 09:29:26.079823 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-02-10 09:29:26.079832 | orchestrator | Monday 10 February 2025 09:26:49 +0000 (0:00:01.891) 0:05:48.316 ******* 2025-02-10 09:29:26.079889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.079909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.079918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.079927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.079935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2025-02-10 09:29:26.079968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.079991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.080001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.080009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.080018 | orchestrator | 2025-02-10 09:29:26.080026 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-02-10 09:29:26.080034 | orchestrator | Monday 10 February 2025 09:26:56 +0000 (0:00:07.288) 0:05:55.605 
******* 2025-02-10 09:29:26.080068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.080085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.080094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.080102 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.080111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.080126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.080135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.080147 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.080176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.080186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.080195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.080203 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.080211 | orchestrator | 2025-02-10 09:29:26.080219 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-02-10 09:29:26.080227 | orchestrator | Monday 10 February 2025 09:26:57 +0000 (0:00:00.966) 0:05:56.571 ******* 2025-02-10 09:29:26.080235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080301 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.080328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080337 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.080346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  
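
Aside on the haproxy/firewall items iterated above: each kolla service definition carries an 'haproxy' dict whose entries mix Python booleans with Ansible-style 'yes'/'no' strings (for example, nova_metadata_external is enabled: 'no' while nova_api is enabled: True). The snippet below is a minimal, hand-written Python sketch using only values visible in the log items; the filtering condition is an assumption chosen for illustration and is not kolla-ansible's actual task logic.

# Illustrative sketch only, not kolla-ansible code.
# Values copied from the nova haproxy items printed in the log above.
nova_haproxy = {
    "nova_api": {"enabled": True, "external": False,
                 "port": "8774", "listen_port": "8774"},
    "nova_api_external": {"enabled": True, "external": True,
                          "port": "8774", "listen_port": "8774"},
    "nova_metadata": {"enabled": True, "external": False,
                      "port": "8775", "listen_port": "8775"},
    "nova_metadata_external": {"enabled": "no", "external": True,
                               "port": "8775", "listen_port": "8775"},
}

def truthy(value):
    # Ansible-style booleans: the log mixes True/False with 'yes'/'no' strings.
    return value is True or str(value).lower() in ("yes", "true", "1")

# Internal listeners that are actually enabled (assumed to be what a
# firewall rule for the internal VIP would have to allow).
internal_ports = sorted(
    entry["listen_port"]
    for entry in nova_haproxy.values()
    if truthy(entry["enabled"]) and not entry["external"]
)
print(internal_ports)  # ['8774', '8775']
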
2025-02-10 09:29:26.080354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:29:26.080378 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.080386 | orchestrator | 2025-02-10 09:29:26.080394 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-02-10 09:29:26.080402 | orchestrator | Monday 10 February 2025 09:26:58 +0000 (0:00:01.499) 0:05:58.071 ******* 2025-02-10 09:29:26.080410 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.080418 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.080426 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.080434 | orchestrator | 2025-02-10 09:29:26.080442 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-02-10 09:29:26.080450 | orchestrator | Monday 10 February 2025 09:26:59 +0000 (0:00:00.603) 0:05:58.674 ******* 2025-02-10 09:29:26.080458 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.080466 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.080474 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.080482 | orchestrator | 2025-02-10 09:29:26.080489 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-02-10 09:29:26.080498 | orchestrator | Monday 10 February 2025 09:27:00 +0000 (0:00:01.454) 0:06:00.128 ******* 2025-02-10 09:29:26.080506 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.080514 | orchestrator | 2025-02-10 09:29:26.080522 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-02-10 09:29:26.080530 | orchestrator | Monday 10 February 2025 09:27:02 +0000 (0:00:01.888) 0:06:02.017 ******* 2025-02-10 09:29:26.080538 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-02-10 09:29:26.080551 | orchestrator | 2025-02-10 09:29:26.080559 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-02-10 09:29:26.080567 | orchestrator | Monday 10 February 2025 09:27:04 +0000 (0:00:01.811) 0:06:03.829 ******* 2025-02-10 09:29:26.080575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}}}}) 2025-02-10 09:29:26.080584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-02-10 09:29:26.080592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-02-10 09:29:26.080601 | orchestrator | 2025-02-10 09:29:26.080609 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-02-10 09:29:26.080617 | orchestrator | Monday 10 February 2025 09:27:10 +0000 (0:00:06.140) 0:06:09.969 ******* 2025-02-10 09:29:26.080653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:29:26.080663 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.080672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:29:26.080680 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.080689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:29:26.080697 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.080705 | orchestrator | 2025-02-10 09:29:26.080713 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-02-10 09:29:26.080726 | orchestrator | Monday 10 February 2025 09:27:12 +0000 (0:00:02.178) 0:06:12.147 ******* 2025-02-10 
09:29:26.080735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:29:26.080743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:29:26.080751 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.080760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:29:26.080772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:29:26.080780 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.080788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:29:26.080796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:29:26.080804 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.080812 | orchestrator | 2025-02-10 09:29:26.080820 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-10 09:29:26.080829 | orchestrator | Monday 10 February 2025 09:27:15 +0000 (0:00:02.405) 0:06:14.553 ******* 2025-02-10 09:29:26.080837 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.080845 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.080868 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.080876 | orchestrator | 2025-02-10 09:29:26.080884 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-10 09:29:26.080892 | orchestrator | Monday 10 February 2025 09:27:15 +0000 (0:00:00.481) 0:06:15.034 ******* 2025-02-10 09:29:26.080900 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.080907 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.080916 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.080923 | orchestrator | 2025-02-10 09:29:26.080931 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-02-10 09:29:26.080939 | orchestrator | Monday 10 February 2025 09:27:17 +0000 (0:00:01.571) 0:06:16.606 ******* 2025-02-10 09:29:26.080967 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-02-10 09:29:26.080977 | orchestrator | 2025-02-10 09:29:26.080985 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-02-10 
09:29:26.080993 | orchestrator | Monday 10 February 2025 09:27:18 +0000 (0:00:01.424) 0:06:18.031 ******* 2025-02-10 09:29:26.081001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:29:26.081016 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:29:26.081033 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:29:26.081049 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081057 | orchestrator | 2025-02-10 09:29:26.081065 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-02-10 09:29:26.081073 | orchestrator | Monday 10 February 2025 09:27:20 +0000 (0:00:01.981) 0:06:20.012 ******* 2025-02-10 09:29:26.081082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:29:26.081090 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 
09:29:26.081115 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:29:26.081131 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081139 | orchestrator | 2025-02-10 09:29:26.081147 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-02-10 09:29:26.081155 | orchestrator | Monday 10 February 2025 09:27:22 +0000 (0:00:02.222) 0:06:22.235 ******* 2025-02-10 09:29:26.081186 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081196 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081204 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081212 | orchestrator | 2025-02-10 09:29:26.081220 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-10 09:29:26.081234 | orchestrator | Monday 10 February 2025 09:27:25 +0000 (0:00:02.124) 0:06:24.359 ******* 2025-02-10 09:29:26.081242 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081251 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081259 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081267 | orchestrator | 2025-02-10 09:29:26.081275 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-10 09:29:26.081283 | orchestrator | Monday 10 February 2025 09:27:25 +0000 (0:00:00.620) 0:06:24.980 ******* 2025-02-10 09:29:26.081291 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081299 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081307 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081315 | orchestrator | 2025-02-10 09:29:26.081323 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-02-10 09:29:26.081331 | orchestrator | Monday 10 February 2025 09:27:26 +0000 (0:00:01.231) 0:06:26.211 ******* 2025-02-10 09:29:26.081339 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-02-10 09:29:26.081347 | orchestrator | 2025-02-10 09:29:26.081355 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-02-10 09:29:26.081363 | orchestrator | Monday 10 February 2025 09:27:28 +0000 (0:00:01.339) 0:06:27.551 ******* 2025-02-10 09:29:26.081371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:29:26.081379 
| orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:29:26.081395 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:29:26.081412 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081420 | orchestrator | 2025-02-10 09:29:26.081428 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-02-10 09:29:26.081436 | orchestrator | Monday 10 February 2025 09:27:30 +0000 (0:00:02.003) 0:06:29.554 ******* 2025-02-10 09:29:26.081444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:29:26.081457 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:29:26.081496 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:29:26.081520 | orchestrator | skipping: [testbed-node-2] 
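
Aside on the nova-cell console proxies handled in the tasks above: only nova-novncproxy is enabled here (its haproxy config is reported changed), while nova-spicehtml5proxy and nova-serialproxy are skipped, and the enabled entries carry backend_http_extra options such as 'timeout tunnel 1h'. The snippet below is a rough, hand-written sketch of an haproxy listen block derived from the nova_novncproxy entry shown in the log; the bind address and server naming are assumptions for illustration, and kolla-ansible's own haproxy template is not reproduced here.

# Illustrative sketch only. Entry values and backend IPs are taken from the log above.
novncproxy = {
    "mode": "http", "port": "6080", "listen_port": "6080",
    "backend_http_extra": ["timeout tunnel 1h"],
}
backends = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]  # internal API IPs seen in the healthchecks

lines = [
    "listen nova_novncproxy",
    f"    mode {novncproxy['mode']}",
    f"    bind 0.0.0.0:{novncproxy['listen_port']}",  # bind address is an assumption
]
lines += [f"    {extra}" for extra in novncproxy["backend_http_extra"]]
lines += [
    f"    server testbed-node-{i} {ip}:{novncproxy['port']} check"
    for i, ip in enumerate(backends)
]
print("\n".join(lines))
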
2025-02-10 09:29:26.081528 | orchestrator | 2025-02-10 09:29:26.081536 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-02-10 09:29:26.081544 | orchestrator | Monday 10 February 2025 09:27:32 +0000 (0:00:01.879) 0:06:31.434 ******* 2025-02-10 09:29:26.081552 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081560 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081569 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081577 | orchestrator | 2025-02-10 09:29:26.081585 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-10 09:29:26.081593 | orchestrator | Monday 10 February 2025 09:27:34 +0000 (0:00:02.200) 0:06:33.635 ******* 2025-02-10 09:29:26.081601 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081609 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081617 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081625 | orchestrator | 2025-02-10 09:29:26.081633 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-10 09:29:26.081641 | orchestrator | Monday 10 February 2025 09:27:34 +0000 (0:00:00.579) 0:06:34.215 ******* 2025-02-10 09:29:26.081649 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.081657 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.081664 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.081672 | orchestrator | 2025-02-10 09:29:26.081681 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-02-10 09:29:26.081689 | orchestrator | Monday 10 February 2025 09:27:36 +0000 (0:00:01.869) 0:06:36.084 ******* 2025-02-10 09:29:26.081697 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.081705 | orchestrator | 2025-02-10 09:29:26.081713 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-02-10 09:29:26.081724 | orchestrator | Monday 10 February 2025 09:27:38 +0000 (0:00:02.106) 0:06:38.191 ******* 2025-02-10 09:29:26.081733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.081746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:29:26.081755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.081784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.081801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.081810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:29:26.081818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.081834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.081843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.081917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.081935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.081943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:29:26.081950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.081963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.081971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.081978 | orchestrator | 2025-02-10 09:29:26.081985 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-02-10 09:29:26.081992 | orchestrator | Monday 10 February 2025 09:27:43 +0000 (0:00:04.801) 0:06:42.992 ******* 2025-02-10 09:29:26.082059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.082071 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:29:26.082080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.082092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.082100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.082107 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.082151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-02-10 09:29:26.082161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:29:26.082169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.082176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.082190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.082197 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.082205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.082218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:29:26.082242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.082251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:29:26.082258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:29:26.082271 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.082278 | orchestrator | 2025-02-10 09:29:26.082286 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-02-10 09:29:26.082293 | orchestrator | Monday 10 February 2025 09:27:45 +0000 (0:00:01.313) 0:06:44.305 ******* 2025-02-10 09:29:26.082300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:29:26.082308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:29:26.082316 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
09:29:26.082323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:29:26.082331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:29:26.082338 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.082349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:29:26.082360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:29:26.082367 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.082374 | orchestrator | 2025-02-10 09:29:26.082381 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-02-10 09:29:26.082388 | orchestrator | Monday 10 February 2025 09:27:46 +0000 (0:00:01.586) 0:06:45.891 ******* 2025-02-10 09:29:26.082396 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.082403 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.082410 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.082417 | orchestrator | 2025-02-10 09:29:26.082424 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-02-10 09:29:26.082431 | orchestrator | Monday 10 February 2025 09:27:47 +0000 (0:00:00.634) 0:06:46.526 ******* 2025-02-10 09:29:26.082438 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.082445 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.082452 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.082460 | orchestrator | 2025-02-10 09:29:26.082484 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-02-10 09:29:26.082492 | orchestrator | Monday 10 February 2025 09:27:48 +0000 (0:00:01.526) 0:06:48.053 ******* 2025-02-10 09:29:26.082499 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.082506 | orchestrator | 2025-02-10 09:29:26.082513 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-02-10 09:29:26.082523 | orchestrator | Monday 10 February 2025 09:27:50 +0000 (0:00:02.099) 0:06:50.152 ******* 2025-02-10 09:29:26.082531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:26.082543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:26.082550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:26.082565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:26.082590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:26.082609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:26.082617 | orchestrator | 2025-02-10 09:29:26.082624 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-02-10 09:29:26.082631 | orchestrator | Monday 10 February 2025 09:27:58 +0000 (0:00:07.419) 0:06:57.571 ******* 2025-02-10 09:29:26.082639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:29:26.082662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:26.082671 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.082678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:29:26.082696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:26.082704 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.082711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 
09:29:26.082718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:26.082741 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.082749 | orchestrator | 2025-02-10 09:29:26.082756 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-02-10 09:29:26.082770 | orchestrator | Monday 10 February 2025 09:27:59 +0000 (0:00:01.278) 0:06:58.850 ******* 2025-02-10 09:29:26.082778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-10 09:29:26.082785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:29:26.082792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:29:26.082801 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.082808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-10 09:29:26.082815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:29:26.082822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:29:26.082829 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.082837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-10 09:29:26.082844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:29:26.082865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:29:26.082872 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.082880 | orchestrator | 2025-02-10 09:29:26.082887 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-02-10 09:29:26.082894 | orchestrator | Monday 10 February 2025 09:28:01 +0000 (0:00:01.752) 0:07:00.603 ******* 2025-02-10 09:29:26.082901 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.082908 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.082915 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.082922 | orchestrator | 2025-02-10 09:29:26.082929 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-02-10 09:29:26.082936 | orchestrator | Monday 10 February 2025 09:28:02 +0000 (0:00:00.708) 0:07:01.312 ******* 2025-02-10 09:29:26.082943 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.082950 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.082957 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.082963 | orchestrator | 2025-02-10 09:29:26.082970 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-02-10 09:29:26.082977 | orchestrator | Monday 10 February 2025 09:28:04 +0000 (0:00:02.002) 0:07:03.314 ******* 2025-02-10 09:29:26.082984 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.082991 | orchestrator | 2025-02-10 09:29:26.082998 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-02-10 09:29:26.083010 | orchestrator | Monday 10 February 2025 09:28:06 +0000 (0:00:02.207) 0:07:05.521 ******* 2025-02-10 09:29:26.083036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:29:26.083045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:29:26.083053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:29:26.083089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:29:26.083101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:29:26.083154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:29:26.083161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:29:26.083220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:29:26.083227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083235 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:29:26.083271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:29:26.083280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:29:26.083351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:29:26.083358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083390 | orchestrator | 2025-02-10 09:29:26.083398 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-02-10 09:29:26.083405 | orchestrator | Monday 10 February 2025 09:28:12 +0000 (0:00:05.920) 0:07:11.442 ******* 2025-02-10 09:29:26.083422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:29:26.083430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:29:26.083438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:29:26.083481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:29:26.083488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083528 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.083536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:29:26.083546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:29:26.083553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:29:26.083592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:29:26.083604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 
'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083638 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.083650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:29:26.083658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:29:26.083665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:29:26.083706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:29:26.083714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:29:26.083739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:29:26.083747 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.083753 | orchestrator | 2025-02-10 09:29:26.083761 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-02-10 09:29:26.083768 | orchestrator | Monday 10 February 2025 09:28:13 +0000 (0:00:01.586) 0:07:13.029 ******* 2025-02-10 09:29:26.083775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-02-10 09:29:26.083782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-02-10 09:29:26.083795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:29:26.083802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:29:26.083810 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.083817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-02-10 09:29:26.083824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-02-10 09:29:26.083831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:29:26.083839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:29:26.083846 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.083869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-02-10 09:29:26.083877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-02-10 09:29:26.083884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:29:26.083891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:29:26.083898 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.083905 | orchestrator | 2025-02-10 09:29:26.083912 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-02-10 09:29:26.083922 | orchestrator | Monday 10 February 2025 09:28:15 +0000 (0:00:01.965) 0:07:14.994 ******* 2025-02-10 09:29:26.083929 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084011 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084019 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084026 | orchestrator | 2025-02-10 09:29:26.084033 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-02-10 09:29:26.084040 | orchestrator | Monday 10 February 2025 09:28:16 +0000 (0:00:00.629) 0:07:15.624 ******* 2025-02-10 09:29:26.084047 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084059 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084066 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084073 | orchestrator | 2025-02-10 09:29:26.084080 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-02-10 09:29:26.084087 | orchestrator | Monday 10 February 2025 09:28:18 +0000 (0:00:01.859) 0:07:17.484 ******* 2025-02-10 09:29:26.084094 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.084101 | orchestrator | 2025-02-10 09:29:26.084108 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-02-10 09:29:26.084115 | orchestrator | Monday 10 February 2025 09:28:20 +0000 (0:00:01.870) 0:07:19.355 ******* 2025-02-10 09:29:26.084122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:29:26.084130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:29:26.084138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:29:26.084146 | orchestrator | 2025-02-10 09:29:26.084153 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-02-10 09:29:26.084160 | orchestrator | Monday 10 February 2025 09:28:24 +0000 (0:00:04.133) 0:07:23.488 ******* 2025-02-10 09:29:26.084175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-02-10 09:29:26.084183 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-02-10 09:29:26.084197 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-02-10 09:29:26.084212 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084219 | orchestrator | 2025-02-10 09:29:26.084226 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-02-10 09:29:26.084233 | orchestrator | Monday 10 February 2025 09:28:25 +0000 (0:00:00.870) 0:07:24.359 ******* 2025-02-10 09:29:26.084240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-10 09:29:26.084247 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-10 09:29:26.084261 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084268 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-10 09:29:26.084279 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084285 | orchestrator | 2025-02-10 09:29:26.084292 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-02-10 09:29:26.084299 | orchestrator | Monday 10 February 2025 09:28:26 +0000 (0:00:01.227) 0:07:25.586 ******* 2025-02-10 09:29:26.084306 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084319 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084328 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084336 | orchestrator | 2025-02-10 09:29:26.084343 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-02-10 09:29:26.084353 | orchestrator | Monday 10 February 2025 09:28:27 +0000 (0:00:00.885) 0:07:26.472 ******* 2025-02-10 09:29:26.084360 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084367 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084374 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084381 | orchestrator | 2025-02-10 09:29:26.084388 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-02-10 09:29:26.084395 | orchestrator | Monday 10 February 2025 09:28:28 +0000 (0:00:01.566) 0:07:28.039 ******* 2025-02-10 09:29:26.084402 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:26.084409 | orchestrator | 2025-02-10 09:29:26.084417 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-02-10 09:29:26.084424 | orchestrator | Monday 10 February 2025 09:28:30 +0000 (0:00:02.145) 0:07:30.184 ******* 2025-02-10 09:29:26.084431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.084439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.084447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.084462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.084471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.084478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-10 09:29:26.084485 | orchestrator | 2025-02-10 09:29:26.084492 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-02-10 09:29:26.084499 | orchestrator | Monday 10 February 2025 09:28:40 +0000 (0:00:09.468) 0:07:39.653 ******* 2025-02-10 09:29:26.084507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.084525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.084532 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.084547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.084554 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-10 09:29:26.084575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}}}})  2025-02-10 09:29:26.084582 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084589 | orchestrator | 2025-02-10 09:29:26.084596 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-02-10 09:29:26.084604 | orchestrator | Monday 10 February 2025 09:28:41 +0000 (0:00:01.417) 0:07:41.071 ******* 2025-02-10 09:29:26.084611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084640 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084675 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:29:26.084726 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084733 | orchestrator | 2025-02-10 09:29:26.084740 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-02-10 09:29:26.084747 | orchestrator | Monday 10 February 2025 09:28:43 +0000 (0:00:01.903) 0:07:42.974 ******* 2025-02-10 09:29:26.084754 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084761 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084767 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084774 | orchestrator | 2025-02-10 09:29:26.084781 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-02-10 09:29:26.084788 | orchestrator | Monday 10 February 2025 09:28:44 +0000 (0:00:00.391) 0:07:43.366 ******* 2025-02-10 09:29:26.084795 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084802 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084809 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084815 | orchestrator | 2025-02-10 09:29:26.084822 | orchestrator | TASK [include_role : swift] **************************************************** 2025-02-10 09:29:26.084829 | orchestrator | Monday 10 February 2025 09:28:46 +0000 (0:00:02.158) 0:07:45.524 ******* 2025-02-10 09:29:26.084836 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084843 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084863 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084870 | orchestrator | 2025-02-10 09:29:26.084877 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-02-10 09:29:26.084884 | orchestrator | Monday 10 February 2025 09:28:46 +0000 (0:00:00.684) 0:07:46.209 ******* 2025-02-10 09:29:26.084894 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084902 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084909 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084915 | orchestrator | 2025-02-10 09:29:26.084922 | orchestrator | TASK [include_role : trove] **************************************************** 2025-02-10 09:29:26.084929 | orchestrator | Monday 10 February 2025 09:28:47 +0000 (0:00:00.638) 0:07:46.847 ******* 2025-02-10 09:29:26.084936 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084943 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084950 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084957 | orchestrator | 2025-02-10 09:29:26.084964 | orchestrator | TASK [include_role : venus] **************************************************** 2025-02-10 09:29:26.084971 | orchestrator | Monday 10 February 2025 09:28:48 +0000 (0:00:00.682) 0:07:47.530 ******* 2025-02-10 09:29:26.084978 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.084985 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.084992 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.084998 | orchestrator | 2025-02-10 09:29:26.085005 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-02-10 09:29:26.085012 | orchestrator | Monday 10 February 2025 09:28:48 +0000 (0:00:00.346) 0:07:47.877 ******* 2025-02-10 09:29:26.085019 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085031 | orchestrator | skipping: 
[testbed-node-1] 2025-02-10 09:29:26.085038 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085045 | orchestrator | 2025-02-10 09:29:26.085052 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-02-10 09:29:26.085059 | orchestrator | Monday 10 February 2025 09:28:49 +0000 (0:00:00.653) 0:07:48.530 ******* 2025-02-10 09:29:26.085066 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085073 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085080 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085087 | orchestrator | 2025-02-10 09:29:26.085094 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-02-10 09:29:26.085101 | orchestrator | Monday 10 February 2025 09:28:50 +0000 (0:00:00.979) 0:07:49.509 ******* 2025-02-10 09:29:26.085108 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.085115 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.085122 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.085129 | orchestrator | 2025-02-10 09:29:26.085136 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-02-10 09:29:26.085143 | orchestrator | Monday 10 February 2025 09:28:51 +0000 (0:00:01.139) 0:07:50.648 ******* 2025-02-10 09:29:26.085150 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.085157 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.085163 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.085170 | orchestrator | 2025-02-10 09:29:26.085177 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-02-10 09:29:26.085188 | orchestrator | Monday 10 February 2025 09:28:52 +0000 (0:00:00.703) 0:07:51.351 ******* 2025-02-10 09:29:26.085195 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.085203 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.085209 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.085216 | orchestrator | 2025-02-10 09:29:26.085223 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-02-10 09:29:26.085230 | orchestrator | Monday 10 February 2025 09:28:53 +0000 (0:00:01.442) 0:07:52.794 ******* 2025-02-10 09:29:26.085237 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.085244 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.085251 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.085258 | orchestrator | 2025-02-10 09:29:26.085265 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-02-10 09:29:26.085272 | orchestrator | Monday 10 February 2025 09:28:54 +0000 (0:00:01.124) 0:07:53.918 ******* 2025-02-10 09:29:26.085279 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.085285 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.085292 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.085299 | orchestrator | 2025-02-10 09:29:26.085306 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-02-10 09:29:26.085313 | orchestrator | Monday 10 February 2025 09:28:56 +0000 (0:00:01.473) 0:07:55.391 ******* 2025-02-10 09:29:26.085319 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:26.085330 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:29:26.085337 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:29:26.085344 | orchestrator | 2025-02-10 
09:29:26.085351 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-02-10 09:29:26.085358 | orchestrator | Monday 10 February 2025 09:29:01 +0000 (0:00:05.564) 0:08:00.956 ******* 2025-02-10 09:29:26.085365 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.085372 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.085379 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.085386 | orchestrator | 2025-02-10 09:29:26.085393 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-02-10 09:29:26.085400 | orchestrator | Monday 10 February 2025 09:29:05 +0000 (0:00:03.496) 0:08:04.452 ******* 2025-02-10 09:29:26.085407 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085414 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085421 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085433 | orchestrator | 2025-02-10 09:29:26.085440 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-02-10 09:29:26.085447 | orchestrator | Monday 10 February 2025 09:29:06 +0000 (0:00:01.224) 0:08:05.677 ******* 2025-02-10 09:29:26.085453 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:26.085460 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:29:26.085467 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:29:26.085474 | orchestrator | 2025-02-10 09:29:26.085481 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-02-10 09:29:26.085488 | orchestrator | Monday 10 February 2025 09:29:17 +0000 (0:00:11.013) 0:08:16.691 ******* 2025-02-10 09:29:26.085495 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085502 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085509 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085515 | orchestrator | 2025-02-10 09:29:26.085522 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-02-10 09:29:26.085529 | orchestrator | Monday 10 February 2025 09:29:18 +0000 (0:00:00.975) 0:08:17.666 ******* 2025-02-10 09:29:26.085536 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085543 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085554 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085561 | orchestrator | 2025-02-10 09:29:26.085568 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-02-10 09:29:26.085575 | orchestrator | Monday 10 February 2025 09:29:18 +0000 (0:00:00.445) 0:08:18.111 ******* 2025-02-10 09:29:26.085582 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085589 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085596 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085603 | orchestrator | 2025-02-10 09:29:26.085610 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-02-10 09:29:26.085617 | orchestrator | Monday 10 February 2025 09:29:19 +0000 (0:00:00.755) 0:08:18.867 ******* 2025-02-10 09:29:26.085624 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085631 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085638 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085645 | orchestrator | 2025-02-10 09:29:26.085652 | orchestrator | RUNNING HANDLER [loadbalancer : Start master 
proxysql container] *************** 2025-02-10 09:29:26.085659 | orchestrator | Monday 10 February 2025 09:29:20 +0000 (0:00:00.756) 0:08:19.623 ******* 2025-02-10 09:29:26.085666 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085673 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085680 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085686 | orchestrator | 2025-02-10 09:29:26.085693 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-02-10 09:29:26.085700 | orchestrator | Monday 10 February 2025 09:29:21 +0000 (0:00:00.705) 0:08:20.329 ******* 2025-02-10 09:29:26.085707 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085714 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085721 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085728 | orchestrator | 2025-02-10 09:29:26.085735 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-02-10 09:29:26.085742 | orchestrator | Monday 10 February 2025 09:29:21 +0000 (0:00:00.399) 0:08:20.729 ******* 2025-02-10 09:29:26.085748 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:26.085755 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:26.085762 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:26.085769 | orchestrator | 2025-02-10 09:29:26.085776 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-02-10 09:29:26.085783 | orchestrator | Monday 10 February 2025 09:29:22 +0000 (0:00:01.526) 0:08:22.255 ******* 2025-02-10 09:29:26.085790 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:26.085797 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:26.085804 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:26.085811 | orchestrator | 2025-02-10 09:29:26.085818 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:29:26.085830 | orchestrator | testbed-node-0 : ok=85  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-10 09:29:26.085837 | orchestrator | testbed-node-1 : ok=84  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-10 09:29:26.085844 | orchestrator | testbed-node-2 : ok=84  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-10 09:29:26.085887 | orchestrator | 2025-02-10 09:29:26.085895 | orchestrator | 2025-02-10 09:29:26.085903 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:29:26.085910 | orchestrator | Monday 10 February 2025 09:29:24 +0000 (0:00:01.053) 0:08:23.309 ******* 2025-02-10 09:29:26.085921 | orchestrator | =============================================================================== 2025-02-10 09:29:26.085928 | orchestrator | haproxy-config : Copying over glance haproxy config -------------------- 14.21s 2025-02-10 09:29:26.085935 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 11.01s 2025-02-10 09:29:26.085942 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 9.47s 2025-02-10 09:29:26.085949 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 8.54s 2025-02-10 09:29:26.085956 | orchestrator | haproxy-config : Copying over ironic haproxy config --------------------- 8.51s 2025-02-10 09:29:26.085963 | orchestrator | loadbalancer : 
Copying checks for services which are enabled ------------ 8.03s 2025-02-10 09:29:26.085970 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 7.92s 2025-02-10 09:29:26.085977 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.88s 2025-02-10 09:29:26.085984 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 7.46s 2025-02-10 09:29:26.085991 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.42s 2025-02-10 09:29:26.085997 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.29s 2025-02-10 09:29:26.086004 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.95s 2025-02-10 09:29:26.086011 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 6.35s 2025-02-10 09:29:26.086055 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 6.32s 2025-02-10 09:29:26.086063 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 6.26s 2025-02-10 09:29:26.086070 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.14s 2025-02-10 09:29:26.086077 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.97s 2025-02-10 09:29:26.086084 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.92s 2025-02-10 09:29:26.086091 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.64s 2025-02-10 09:29:26.086098 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.56s 2025-02-10 09:29:26.086109 | orchestrator | 2025-02-10 09:29:26 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:29:29.104522 | orchestrator | 2025-02-10 09:29:26 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:29.104681 | orchestrator | 2025-02-10 09:29:26 | INFO  | Task 0842973f-4532-454e-8211-cb25720f1132 is in state STARTED 2025-02-10 09:29:29.104703 | orchestrator | 2025-02-10 09:29:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:29.104738 | orchestrator | 2025-02-10 09:29:29 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:29:29.105710 | orchestrator | 2025-02-10 09:29:29 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:29.106704 | orchestrator | 2025-02-10 09:29:29 | INFO  | Task 0842973f-4532-454e-8211-cb25720f1132 is in state STARTED 2025-02-10 09:29:29.106964 | orchestrator | 2025-02-10 09:29:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:32.150546 | orchestrator | 2025-02-10 09:29:32 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:29:32.150753 | orchestrator | 2025-02-10 09:29:32 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:29:32.150783 | orchestrator | 2025-02-10 09:29:32 | INFO  | Task 0842973f-4532-454e-8211-cb25720f1132 is in state STARTED 2025-02-10 09:29:35.189590 | orchestrator | 2025-02-10 09:29:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:35.189751 | orchestrator | 2025-02-10 09:29:35 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:29:35.190437 | orchestrator | 
2025-02-10 09:29:35 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED
[... repeated near-identical polling entries trimmed: tasks caa32d92-fc79-4d10-8a4a-5329e6ee3395, ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 and 0842973f-4532-454e-8211-cb25720f1132 remained in state STARTED and were re-checked about every three seconds ("Wait 1 second(s) until the next check") from 09:29:35 until 09:31:31 ...]
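The wait loop above is the OSISM task monitor polling the three deployment tasks until they leave the STARTED state. A minimal, hypothetical Python sketch of that polling pattern follows; the get_state helper and all names are illustrative assumptions, not the actual osism client API.

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll task states until no task is left in STARTED.

    get_state(task_id) -> state string such as "STARTED" or "SUCCESS"
    (assumed helper for this sketch; the real manager exposes its own API).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # Task finished (e.g. SUCCESS or FAILURE); stop tracking it.
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the run above, task 0842973f-4532-454e-8211-cb25720f1132 is the first to reach SUCCESS (at 09:31:34), while the other two tasks are still STARTED at that point.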
2025-02-10 09:31:34.225331 | orchestrator | 2025-02-10 09:31:34 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:34.228243 | orchestrator | 2025-02-10 09:31:34 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:31:34.228304 | orchestrator | 2025-02-10 09:31:34 | INFO  | Task 0842973f-4532-454e-8211-cb25720f1132 is in state SUCCESS 2025-02-10 09:31:34.228334 | orchestrator | 2025-02-10 09:31:34.228346 | orchestrator | 2025-02-10 09:31:34.228357 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:31:34.228367 | orchestrator | 2025-02-10 09:31:34.228378 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:31:34.228388 | orchestrator | Monday 10 February 2025 09:29:28 +0000 (0:00:00.342) 0:00:00.342 ******* 2025-02-10 09:31:34.228399 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:34.228411 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:34.228422 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:34.228432 | orchestrator | 2025-02-10 
09:31:34.228442 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:31:34.228452 | orchestrator | Monday 10 February 2025 09:29:28 +0000 (0:00:00.436) 0:00:00.778 ******* 2025-02-10 09:31:34.228463 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-02-10 09:31:34.228473 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-02-10 09:31:34.228526 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-02-10 09:31:34.228537 | orchestrator | 2025-02-10 09:31:34.228547 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-02-10 09:31:34.228557 | orchestrator | 2025-02-10 09:31:34.228567 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-10 09:31:34.228577 | orchestrator | Monday 10 February 2025 09:29:29 +0000 (0:00:00.397) 0:00:01.176 ******* 2025-02-10 09:31:34.228588 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:34.228597 | orchestrator | 2025-02-10 09:31:34.228607 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-02-10 09:31:34.228617 | orchestrator | Monday 10 February 2025 09:29:30 +0000 (0:00:00.827) 0:00:02.003 ******* 2025-02-10 09:31:34.228627 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:31:34.228643 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:31:34.228654 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:31:34.228664 | orchestrator | 2025-02-10 09:31:34.228674 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-02-10 09:31:34.228684 | orchestrator | Monday 10 February 2025 09:29:32 +0000 (0:00:01.889) 0:00:03.893 ******* 2025-02-10 09:31:34.228697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.228713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.228734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.228763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.228775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.228787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.228797 | orchestrator | 2025-02-10 09:31:34.228808 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-10 09:31:34.228818 | orchestrator | Monday 10 February 2025 09:29:33 +0000 (0:00:01.851) 0:00:05.744 ******* 2025-02-10 09:31:34.228828 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:34.228839 | orchestrator | 2025-02-10 09:31:34.228851 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-02-10 09:31:34.228863 | orchestrator | Monday 10 February 2025 09:29:35 +0000 (0:00:01.284) 0:00:07.028 ******* 2025-02-10 09:31:34.228888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.228956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.228970 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.228982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229033 | orchestrator | 2025-02-10 09:31:34.229044 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-02-10 09:31:34.229056 | orchestrator | Monday 10 February 2025 09:29:38 +0000 (0:00:03.693) 0:00:10.721 ******* 2025-02-10 09:31:34.229068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:31:34.229081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:31:34.229099 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:34.229118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:31:34.229130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:31:34.229142 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:34.229153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:31:34.229165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:31:34.229183 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:34.229195 | orchestrator | 2025-02-10 09:31:34.229205 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-02-10 09:31:34.229216 | orchestrator | Monday 10 February 2025 09:29:40 +0000 (0:00:01.471) 0:00:12.192 ******* 2025-02-10 09:31:34.229233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:31:34.229245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:31:34.229255 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:34.229266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:31:34.229277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:31:34.229297 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:34.229313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:31:34.229324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:31:34.229335 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:34.229345 | orchestrator | 2025-02-10 09:31:34.229355 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-02-10 09:31:34.229365 | orchestrator | Monday 10 February 2025 09:29:42 +0000 (0:00:01.795) 0:00:13.988 ******* 2025-02-10 09:31:34.229376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.229386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.229407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.229419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229456 | orchestrator | 2025-02-10 09:31:34.229467 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-02-10 09:31:34.229477 | orchestrator | Monday 10 February 2025 09:29:46 +0000 (0:00:03.933) 0:00:17.922 ******* 2025-02-10 09:31:34.229487 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:34.229497 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:34.229507 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:34.229517 | orchestrator | 2025-02-10 09:31:34.229527 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-02-10 09:31:34.229536 | orchestrator | Monday 10 February 2025 09:29:49 +0000 (0:00:03.342) 0:00:21.264 ******* 2025-02-10 09:31:34.229546 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:34.229556 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:34.229566 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:34.229576 | orchestrator | 2025-02-10 09:31:34.229586 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-02-10 09:31:34.229595 | orchestrator | Monday 10 February 2025 09:29:51 +0000 (0:00:02.610) 
0:00:23.875 ******* 2025-02-10 09:31:34.229617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.229636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.229655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:31:34.229682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:31:34.229743 | orchestrator | 2025-02-10 09:31:34.229753 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-10 09:31:34.229764 | orchestrator | Monday 10 February 2025 09:29:55 +0000 (0:00:03.079) 0:00:26.954 ******* 2025-02-10 09:31:34.229774 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:34.229790 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:34.229800 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:34.229810 | orchestrator | 2025-02-10 09:31:34.229826 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-10 09:31:34.229837 | orchestrator | Monday 10 February 2025 09:29:55 +0000 (0:00:00.437) 0:00:27.391 ******* 2025-02-10 09:31:34.229847 | orchestrator | 2025-02-10 09:31:34.229857 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-10 09:31:34.229867 | orchestrator | Monday 10 February 2025 09:29:55 +0000 (0:00:00.266) 0:00:27.657 ******* 
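The healthcheck blocks repeated in the service definitions above (test: healthcheck_curl http://192.168.16.10:9200 for opensearch and :5601 for opensearch-dashboards, interval 30, retries 3, timeout 30) mean the containers are only reported healthy once those endpoints answer. A rough Python equivalent of such a probe is sketched below; healthcheck_curl is kolla's curl wrapper inside the container, so this standalone version is only an illustration that reuses the interval, retry and timeout values recorded in the log.

import time
import urllib.error
import urllib.request

def probe(url: str, retries: int = 3, interval: int = 30, timeout: int = 30) -> bool:
    """Return True as soon as the endpoint answers, False if it never does."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True                      # endpoint answered -> container healthy
        except (urllib.error.URLError, OSError):
            if attempt < retries - 1:
                time.sleep(interval)             # wait before the next attempt
    return False

# Endpoints as listed in the healthcheck definitions for testbed-node-0 above.
print(probe("http://192.168.16.10:9200"))   # OpenSearch API
print(probe("http://192.168.16.10:5601"))   # OpenSearch Dashboards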
2025-02-10 09:31:34.229877 | orchestrator | 2025-02-10 09:31:34.229887 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-10 09:31:34.229915 | orchestrator | Monday 10 February 2025 09:29:55 +0000 (0:00:00.066) 0:00:27.724 ******* 2025-02-10 09:31:34.229929 | orchestrator | 2025-02-10 09:31:34.229939 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-02-10 09:31:34.229949 | orchestrator | Monday 10 February 2025 09:29:55 +0000 (0:00:00.069) 0:00:27.793 ******* 2025-02-10 09:31:34.229958 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:34.229968 | orchestrator | 2025-02-10 09:31:34.229978 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-02-10 09:31:34.229988 | orchestrator | Monday 10 February 2025 09:29:56 +0000 (0:00:00.354) 0:00:28.147 ******* 2025-02-10 09:31:34.229998 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:34.230008 | orchestrator | 2025-02-10 09:31:34.230070 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-02-10 09:31:34.230081 | orchestrator | Monday 10 February 2025 09:29:56 +0000 (0:00:00.598) 0:00:28.745 ******* 2025-02-10 09:31:34.230091 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:34.230101 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:34.230111 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:34.230121 | orchestrator | 2025-02-10 09:31:34.230130 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-02-10 09:31:34.230140 | orchestrator | Monday 10 February 2025 09:30:23 +0000 (0:00:26.902) 0:00:55.647 ******* 2025-02-10 09:31:34.230150 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:34.230160 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:34.230170 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:34.230180 | orchestrator | 2025-02-10 09:31:34.230190 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-10 09:31:34.230200 | orchestrator | Monday 10 February 2025 09:31:20 +0000 (0:00:56.426) 0:01:52.074 ******* 2025-02-10 09:31:34.230210 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:34.230220 | orchestrator | 2025-02-10 09:31:34.230230 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-02-10 09:31:34.230239 | orchestrator | Monday 10 February 2025 09:31:21 +0000 (0:00:00.906) 0:01:52.980 ******* 2025-02-10 09:31:34.230249 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:34.230260 | orchestrator | 2025-02-10 09:31:34.230270 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-02-10 09:31:34.230279 | orchestrator | Monday 10 February 2025 09:31:24 +0000 (0:00:03.017) 0:01:55.998 ******* 2025-02-10 09:31:34.230289 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:34.230299 | orchestrator | 2025-02-10 09:31:34.230309 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-02-10 09:31:34.230325 | orchestrator | Monday 10 February 2025 09:31:26 +0000 (0:00:02.802) 0:01:58.800 ******* 2025-02-10 09:31:37.275428 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:37.275549 | orchestrator | 2025-02-10 
09:31:37.275561 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-02-10 09:31:37.275573 | orchestrator | Monday 10 February 2025 09:31:30 +0000 (0:00:03.299) 0:02:02.100 *******
2025-02-10 09:31:37.275582 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:31:37.275592 | orchestrator |
2025-02-10 09:31:37.275601 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:31:37.275610 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-02-10 09:31:37.275656 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-02-10 09:31:37.275664 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-02-10 09:31:37.275673 | orchestrator |
2025-02-10 09:31:37.275682 | orchestrator |
2025-02-10 09:31:37.275691 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:31:37.275700 | orchestrator | Monday 10 February 2025 09:31:33 +0000 (0:00:03.300) 0:02:05.401 *******
2025-02-10 09:31:37.275709 | orchestrator | ===============================================================================
2025-02-10 09:31:37.275718 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 56.43s
2025-02-10 09:31:37.275727 | orchestrator | opensearch : Restart opensearch container ------------------------------ 26.90s
2025-02-10 09:31:37.275736 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.93s
2025-02-10 09:31:37.275745 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.69s
2025-02-10 09:31:37.275754 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.34s
2025-02-10 09:31:37.275762 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.30s
2025-02-10 09:31:37.275771 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.30s
2025-02-10 09:31:37.275780 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.08s
2025-02-10 09:31:37.275788 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.02s
2025-02-10 09:31:37.275796 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.80s
2025-02-10 09:31:37.275803 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.61s
2025-02-10 09:31:37.275811 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.89s
2025-02-10 09:31:37.275819 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.85s
2025-02-10 09:31:37.275842 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.80s
2025-02-10 09:31:37.275852 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.47s
2025-02-10 09:31:37.275862 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.28s
2025-02-10 09:31:37.275871 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.91s
2025-02-10 09:31:37.275880 | orchestrator | opensearch : include_tasks
---------------------------------------------- 0.83s 2025-02-10 09:31:37.275888 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.60s 2025-02-10 09:31:37.275930 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2025-02-10 09:31:37.275940 | orchestrator | 2025-02-10 09:31:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:37.275965 | orchestrator | 2025-02-10 09:31:37 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:37.276282 | orchestrator | 2025-02-10 09:31:37 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:31:40.331529 | orchestrator | 2025-02-10 09:31:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:40.331679 | orchestrator | 2025-02-10 09:31:40 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:40.332400 | orchestrator | 2025-02-10 09:31:40 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:31:43.389588 | orchestrator | 2025-02-10 09:31:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:43.389854 | orchestrator | 2025-02-10 09:31:43 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:46.431510 | orchestrator | 2025-02-10 09:31:43 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:31:46.431722 | orchestrator | 2025-02-10 09:31:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:46.431759 | orchestrator | 2025-02-10 09:31:46 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:49.491147 | orchestrator | 2025-02-10 09:31:46 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:31:49.491271 | orchestrator | 2025-02-10 09:31:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:49.491303 | orchestrator | 2025-02-10 09:31:49 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:49.497271 | orchestrator | 2025-02-10 09:31:49 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:31:52.547584 | orchestrator | 2025-02-10 09:31:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:52.547727 | orchestrator | 2025-02-10 09:31:52 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:55.592322 | orchestrator | 2025-02-10 09:31:52 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:31:55.592455 | orchestrator | 2025-02-10 09:31:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:55.592492 | orchestrator | 2025-02-10 09:31:55 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:58.644541 | orchestrator | 2025-02-10 09:31:55 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:31:58.644654 | orchestrator | 2025-02-10 09:31:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:58.644682 | orchestrator | 2025-02-10 09:31:58 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:31:58.645860 | orchestrator | 2025-02-10 09:31:58 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:01.700266 | orchestrator | 2025-02-10 09:31:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:01.700384 | orchestrator | 2025-02-10 09:32:01 | 
INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:01.701548 | orchestrator | 2025-02-10 09:32:01 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:01.701900 | orchestrator | 2025-02-10 09:32:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:04.768180 | orchestrator | 2025-02-10 09:32:04 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:04.768934 | orchestrator | 2025-02-10 09:32:04 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:07.828534 | orchestrator | 2025-02-10 09:32:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:07.828706 | orchestrator | 2025-02-10 09:32:07 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:07.829776 | orchestrator | 2025-02-10 09:32:07 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:10.898007 | orchestrator | 2025-02-10 09:32:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:10.898170 | orchestrator | 2025-02-10 09:32:10 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:10.899559 | orchestrator | 2025-02-10 09:32:10 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:13.942975 | orchestrator | 2025-02-10 09:32:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:13.943166 | orchestrator | 2025-02-10 09:32:13 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:13.943602 | orchestrator | 2025-02-10 09:32:13 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:13.943637 | orchestrator | 2025-02-10 09:32:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:16.993589 | orchestrator | 2025-02-10 09:32:16 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:16.994561 | orchestrator | 2025-02-10 09:32:16 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:20.046647 | orchestrator | 2025-02-10 09:32:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:20.046802 | orchestrator | 2025-02-10 09:32:20 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:20.047793 | orchestrator | 2025-02-10 09:32:20 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:23.087614 | orchestrator | 2025-02-10 09:32:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:23.087784 | orchestrator | 2025-02-10 09:32:23 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:26.130532 | orchestrator | 2025-02-10 09:32:23 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:26.130665 | orchestrator | 2025-02-10 09:32:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:26.130700 | orchestrator | 2025-02-10 09:32:26 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:26.131235 | orchestrator | 2025-02-10 09:32:26 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:29.178625 | orchestrator | 2025-02-10 09:32:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:29.178817 | orchestrator | 2025-02-10 09:32:29 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:29.180218 | 
orchestrator | 2025-02-10 09:32:29 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:32.225191 | orchestrator | 2025-02-10 09:32:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:32.225309 | orchestrator | 2025-02-10 09:32:32 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:35.270487 | orchestrator | 2025-02-10 09:32:32 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:35.270638 | orchestrator | 2025-02-10 09:32:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:35.270677 | orchestrator | 2025-02-10 09:32:35 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:38.323867 | orchestrator | 2025-02-10 09:32:35 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:38.324076 | orchestrator | 2025-02-10 09:32:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:38.324120 | orchestrator | 2025-02-10 09:32:38 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:41.375998 | orchestrator | 2025-02-10 09:32:38 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:41.376144 | orchestrator | 2025-02-10 09:32:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:41.376183 | orchestrator | 2025-02-10 09:32:41 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:41.377047 | orchestrator | 2025-02-10 09:32:41 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:44.425361 | orchestrator | 2025-02-10 09:32:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:44.425639 | orchestrator | 2025-02-10 09:32:44 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:47.462754 | orchestrator | 2025-02-10 09:32:44 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:47.462922 | orchestrator | 2025-02-10 09:32:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:47.463018 | orchestrator | 2025-02-10 09:32:47 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:50.514274 | orchestrator | 2025-02-10 09:32:47 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:50.514451 | orchestrator | 2025-02-10 09:32:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:50.514493 | orchestrator | 2025-02-10 09:32:50 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:53.565427 | orchestrator | 2025-02-10 09:32:50 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:53.565601 | orchestrator | 2025-02-10 09:32:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:53.565640 | orchestrator | 2025-02-10 09:32:53 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:56.607113 | orchestrator | 2025-02-10 09:32:53 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:56.607235 | orchestrator | 2025-02-10 09:32:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:56.607265 | orchestrator | 2025-02-10 09:32:56 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:59.654832 | orchestrator | 2025-02-10 09:32:56 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state 
STARTED 2025-02-10 09:32:59.655023 | orchestrator | 2025-02-10 09:32:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:59.655080 | orchestrator | 2025-02-10 09:32:59 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:32:59.655434 | orchestrator | 2025-02-10 09:32:59 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state STARTED 2025-02-10 09:32:59.655912 | orchestrator | 2025-02-10 09:32:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:02.700344 | orchestrator | 2025-02-10 09:33:02 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:33:02.707085 | orchestrator | 2025-02-10 09:33:02 | INFO  | Task ade4e3c5-3c23-406b-b37c-ccf1b999f9a5 is in state SUCCESS 2025-02-10 09:33:02.708750 | orchestrator | 2025-02-10 09:33:02.708811 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:33:02.708829 | orchestrator | 2025-02-10 09:33:02.708843 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-02-10 09:33:02.708858 | orchestrator | 2025-02-10 09:33:02.708873 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-10 09:33:02.708887 | orchestrator | Monday 10 February 2025 09:18:30 +0000 (0:00:01.989) 0:00:01.989 ******* 2025-02-10 09:33:02.708902 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.708918 | orchestrator | 2025-02-10 09:33:02.708957 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-10 09:33:02.708974 | orchestrator | Monday 10 February 2025 09:18:31 +0000 (0:00:01.483) 0:00:03.472 ******* 2025-02-10 09:33:02.709019 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:33:02.709034 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:33:02.709354 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:33:02.709378 | orchestrator | 2025-02-10 09:33:02.709393 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-10 09:33:02.709407 | orchestrator | Monday 10 February 2025 09:18:32 +0000 (0:00:00.947) 0:00:04.419 ******* 2025-02-10 09:33:02.709422 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.709437 | orchestrator | 2025-02-10 09:33:02.709451 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-10 09:33:02.709508 | orchestrator | Monday 10 February 2025 09:18:33 +0000 (0:00:01.147) 0:00:05.566 ******* 2025-02-10 09:33:02.709534 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.709557 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.709581 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.709710 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.709742 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.709766 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.709789 | orchestrator | 2025-02-10 09:33:02.710385 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-10 09:33:02.710406 | 
orchestrator | Monday 10 February 2025 09:18:35 +0000 (0:00:01.396) 0:00:06.962 ******* 2025-02-10 09:33:02.710420 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.710434 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.710448 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.710463 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.710477 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.710491 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.710505 | orchestrator | 2025-02-10 09:33:02.710519 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-10 09:33:02.710533 | orchestrator | Monday 10 February 2025 09:18:36 +0000 (0:00:00.931) 0:00:07.894 ******* 2025-02-10 09:33:02.710547 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.710771 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.710790 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.710803 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.710817 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.710832 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.710846 | orchestrator | 2025-02-10 09:33:02.710860 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-10 09:33:02.710874 | orchestrator | Monday 10 February 2025 09:18:37 +0000 (0:00:01.122) 0:00:09.016 ******* 2025-02-10 09:33:02.710888 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.710902 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.710917 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.710953 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.710970 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.710985 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.711001 | orchestrator | 2025-02-10 09:33:02.711018 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-10 09:33:02.711034 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:01.401) 0:00:10.417 ******* 2025-02-10 09:33:02.711051 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.711067 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.711083 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.711099 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.711115 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.711132 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.711147 | orchestrator | 2025-02-10 09:33:02.711164 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-10 09:33:02.711180 | orchestrator | Monday 10 February 2025 09:18:39 +0000 (0:00:00.877) 0:00:11.295 ******* 2025-02-10 09:33:02.711197 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.711231 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.711247 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.711263 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.711279 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.711294 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.711310 | orchestrator | 2025-02-10 09:33:02.711326 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-10 09:33:02.711343 | orchestrator | Monday 10 February 2025 09:18:40 +0000 (0:00:01.194) 0:00:12.489 ******* 2025-02-10 09:33:02.711357 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
09:33:02.711373 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.711408 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.711423 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.711437 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.711451 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.711464 | orchestrator | 2025-02-10 09:33:02.711479 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-10 09:33:02.711493 | orchestrator | Monday 10 February 2025 09:18:41 +0000 (0:00:00.753) 0:00:13.243 ******* 2025-02-10 09:33:02.711506 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.711520 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.711534 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.711547 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.711561 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.711574 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.711588 | orchestrator | 2025-02-10 09:33:02.712019 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-10 09:33:02.712062 | orchestrator | Monday 10 February 2025 09:18:42 +0000 (0:00:01.066) 0:00:14.310 ******* 2025-02-10 09:33:02.712078 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:33:02.712093 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:02.712107 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:02.712121 | orchestrator | 2025-02-10 09:33:02.712135 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-10 09:33:02.712149 | orchestrator | Monday 10 February 2025 09:18:43 +0000 (0:00:00.858) 0:00:15.168 ******* 2025-02-10 09:33:02.712163 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.712177 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.712190 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.712210 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.712232 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.712257 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.712281 | orchestrator | 2025-02-10 09:33:02.712300 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-10 09:33:02.712314 | orchestrator | Monday 10 February 2025 09:18:45 +0000 (0:00:01.945) 0:00:17.113 ******* 2025-02-10 09:33:02.712328 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:33:02.712342 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:02.712356 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:02.712370 | orchestrator | 2025-02-10 09:33:02.712383 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-10 09:33:02.712397 | orchestrator | Monday 10 February 2025 09:18:49 +0000 (0:00:04.085) 0:00:21.199 ******* 2025-02-10 09:33:02.712411 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.712849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.712875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 
09:33:02.712892 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.712908 | orchestrator | 2025-02-10 09:33:02.712922 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-10 09:33:02.712975 | orchestrator | Monday 10 February 2025 09:18:50 +0000 (0:00:00.830) 0:00:22.029 ******* 2025-02-10 09:33:02.713021 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713442 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713461 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713475 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.713490 | orchestrator | 2025-02-10 09:33:02.713504 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-10 09:33:02.713518 | orchestrator | Monday 10 February 2025 09:18:51 +0000 (0:00:01.650) 0:00:23.680 ******* 2025-02-10 09:33:02.713561 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713579 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713594 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713608 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.713622 | orchestrator | 2025-02-10 09:33:02.713636 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-10 09:33:02.713825 | orchestrator | Monday 10 February 2025 09:18:52 +0000 (0:00:00.307) 0:00:23.987 ******* 2025-02-10 09:33:02.713857 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-10 09:18:46.439068', 'end': '2025-02-10 09:18:46.718392', 'delta': 
'0:00:00.279324', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713876 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-10 09:18:47.755732', 'end': '2025-02-10 09:18:48.074372', 'delta': '0:00:00.318640', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713905 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-10 09:18:48.790521', 'end': '2025-02-10 09:18:49.080767', 'delta': '0:00:00.290246', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-10 09:33:02.713921 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.714372 | orchestrator | 2025-02-10 09:33:02.714400 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-10 09:33:02.714415 | orchestrator | Monday 10 February 2025 09:18:52 +0000 (0:00:00.348) 0:00:24.336 ******* 2025-02-10 09:33:02.714429 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.714443 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.714457 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.714470 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.714524 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.714540 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.714554 | orchestrator | 2025-02-10 09:33:02.714569 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-10 09:33:02.714583 | orchestrator | Monday 10 February 2025 09:18:55 +0000 (0:00:03.224) 0:00:27.561 ******* 2025-02-10 09:33:02.714597 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.714926 | orchestrator | 2025-02-10 09:33:02.715008 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-10 09:33:02.715023 | orchestrator | Monday 10 February 2025 09:18:57 +0000 (0:00:01.291) 0:00:28.854 ******* 2025-02-10 09:33:02.715037 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.715051 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.715065 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.715079 | 
orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.715093 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.715106 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.715120 | orchestrator | 2025-02-10 09:33:02.715134 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-10 09:33:02.715212 | orchestrator | Monday 10 February 2025 09:18:58 +0000 (0:00:01.320) 0:00:30.174 ******* 2025-02-10 09:33:02.715227 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.715239 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.715251 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.715263 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.715276 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.715288 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.715300 | orchestrator | 2025-02-10 09:33:02.715313 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:33:02.715325 | orchestrator | Monday 10 February 2025 09:18:59 +0000 (0:00:01.493) 0:00:31.668 ******* 2025-02-10 09:33:02.715337 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.715349 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.715361 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.715600 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.715619 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.715702 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.715716 | orchestrator | 2025-02-10 09:33:02.715728 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-10 09:33:02.715753 | orchestrator | Monday 10 February 2025 09:19:00 +0000 (0:00:01.133) 0:00:32.801 ******* 2025-02-10 09:33:02.715844 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.715862 | orchestrator | 2025-02-10 09:33:02.715875 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-10 09:33:02.715887 | orchestrator | Monday 10 February 2025 09:19:01 +0000 (0:00:00.187) 0:00:32.989 ******* 2025-02-10 09:33:02.715899 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.715912 | orchestrator | 2025-02-10 09:33:02.715925 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:33:02.716443 | orchestrator | Monday 10 February 2025 09:19:01 +0000 (0:00:00.276) 0:00:33.265 ******* 2025-02-10 09:33:02.716475 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.716488 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.716500 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.716518 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.716528 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.716538 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.716548 | orchestrator | 2025-02-10 09:33:02.716558 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-10 09:33:02.716568 | orchestrator | Monday 10 February 2025 09:19:02 +0000 (0:00:00.806) 0:00:34.071 ******* 2025-02-10 09:33:02.716579 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.716589 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.716599 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:33:02.716608 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.716618 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.716628 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.716638 | orchestrator | 2025-02-10 09:33:02.716648 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-10 09:33:02.716658 | orchestrator | Monday 10 February 2025 09:19:03 +0000 (0:00:01.153) 0:00:35.225 ******* 2025-02-10 09:33:02.716669 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.716678 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.716688 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.716698 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.716708 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.716718 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.716728 | orchestrator | 2025-02-10 09:33:02.716738 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-10 09:33:02.716748 | orchestrator | Monday 10 February 2025 09:19:04 +0000 (0:00:00.788) 0:00:36.013 ******* 2025-02-10 09:33:02.716758 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.716768 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.716778 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.716789 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.716799 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.716809 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.716819 | orchestrator | 2025-02-10 09:33:02.716829 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-10 09:33:02.716839 | orchestrator | Monday 10 February 2025 09:19:04 +0000 (0:00:00.722) 0:00:36.735 ******* 2025-02-10 09:33:02.716849 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.716859 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.716869 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.716879 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.716889 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.716899 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.716909 | orchestrator | 2025-02-10 09:33:02.716919 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-10 09:33:02.716929 | orchestrator | Monday 10 February 2025 09:19:05 +0000 (0:00:00.801) 0:00:37.536 ******* 2025-02-10 09:33:02.716962 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.716983 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.716994 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.717003 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.717013 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.717023 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.717033 | orchestrator | 2025-02-10 09:33:02.717045 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-10 09:33:02.717058 | orchestrator | Monday 10 February 2025 09:19:06 +0000 (0:00:00.847) 0:00:38.384 ******* 2025-02-10 09:33:02.717070 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.717081 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.717093 | orchestrator | skipping: [testbed-node-2] 
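The "find a running mon container" task earlier in this play reduces to the command captured in its result items, docker ps -q --filter name=ceph-mon-<hostname>; the empty stdout recorded for all three monitor nodes is why the subsequent running_mon facts are skipped on this fresh deployment. A minimal Python sketch of that probe follows; the helper name and the loop at the end are illustrative only, not part of ceph-ansible.

import subprocess
from typing import Optional

def running_mon_container(hostname: str) -> Optional[str]:
    """Return the ID of a running ceph-mon container for `hostname`, or None.

    Mirrors the command recorded in the log:
        docker ps -q --filter name=ceph-mon-<hostname>
    """
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout.strip() or None

# In this run stdout was empty for all three monitor nodes, i.e. no mon is running yet.
for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
    print(node, running_mon_container(node))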
2025-02-10 09:33:02.717104 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.717116 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.717126 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.717136 | orchestrator | 2025-02-10 09:33:02.717146 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-10 09:33:02.717157 | orchestrator | Monday 10 February 2025 09:19:07 +0000 (0:00:01.164) 0:00:39.549 ******* 2025-02-10 09:33:02.717167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ec8f61f-9e5b-49cd-9e82-40bf07cffc70', 'scsi-SQEMU_QEMU_HARDDISK_4ec8f61f-9e5b-49cd-9e82-40bf07cffc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96415da1-6a76-4477-bfa7-f065f33f8e6a', 'scsi-SQEMU_QEMU_HARDDISK_96415da1-6a76-4477-bfa7-f065f33f8e6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1ae1e45-2170-46e6-8462-912ee8672daa', 'scsi-SQEMU_QEMU_HARDDISK_c1ae1e45-2170-46e6-8462-912ee8672daa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717637 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.717648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb', 'scsi-SQEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb-part1', 'scsi-SQEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb-part14', 'scsi-SQEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb-part15', 'scsi-SQEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb-part16', 'scsi-SQEMU_QEMU_HARDDISK_0279abfd-66e1-4206-bc3d-37e10a9f78bb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3934c93-3cd2-4fec-bdf3-cbeea6813a64', 'scsi-SQEMU_QEMU_HARDDISK_c3934c93-3cd2-4fec-bdf3-cbeea6813a64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d66bf247-6327-430d-be20-e0df09e5016f', 'scsi-SQEMU_QEMU_HARDDISK_d66bf247-6327-430d-be20-e0df09e5016f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91337675-2774-4bb7-b881-e3b3f642e46a', 'scsi-SQEMU_QEMU_HARDDISK_91337675-2774-4bb7-b881-e3b3f642e46a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.717789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-02-10 09:33:02.717909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.717984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900', 'scsi-SQEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900-part1', 'scsi-SQEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900-part14', 'scsi-SQEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900-part15', 'scsi-SQEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900-part16', 'scsi-SQEMU_QEMU_HARDDISK_151fea73-21a0-4011-a292-7d2582f49900-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718053 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.718143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ff65196-c1cf-41f3-a955-25be0154b459', 'scsi-SQEMU_QEMU_HARDDISK_5ff65196-c1cf-41f3-a955-25be0154b459'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_734cf6c7-c554-44f9-b9cd-702600de9593', 'scsi-SQEMU_QEMU_HARDDISK_734cf6c7-c554-44f9-b9cd-702600de9593'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65f68ad4-1f17-45a3-95a2-0b9d82b524cb', 'scsi-SQEMU_QEMU_HARDDISK_65f68ad4-1f17-45a3-95a2-0b9d82b524cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--70e6c2b1--f69e--5685--9251--bc72a13d87ec-osd--block--70e6c2b1--f69e--5685--9251--bc72a13d87ec', 'dm-uuid-LVM-tRyDiHQo3Yjn1VzNOw3ugs1Wn82jeRSKRwC4KYrSgG2GnhExKIfb2XxWSKPReU0O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3b4a615--299b--50bf--af8e--26b6dc38e729-osd--block--f3b4a615--299b--50bf--af8e--26b6dc38e729', 'dm-uuid-LVM-JHn31bh3nY2HLNSzW3dR8R9cUD0IgsMKo81TxTwb7lrqOvuyQDoSfAU0EqLYI9pE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718233 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.718294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5101bad7--da03--58be--8044--cbe4500fcec9-osd--block--5101bad7--da03--58be--8044--cbe4500fcec9', 'dm-uuid-LVM-kOxijTWfUjiF5H2iNDT8sQh68XR7izWhfpOIMTd85vAZEHMD75gm4lXR3FKeWE8Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d59ecc87--3940--56cd--881a--fbc914ec02de-osd--block--d59ecc87--3940--56cd--881a--fbc914ec02de', 'dm-uuid-LVM-zmO3sPN2RjX9IesfI6WIJfNq2jkKzj9sOULR9HRtVKlTmR7lZamJbFkYqJEMAZJZ'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part1', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part14', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part15', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part16', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--70e6c2b1--f69e--5685--9251--bc72a13d87ec-osd--block--70e6c2b1--f69e--5685--9251--bc72a13d87ec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GRE1ta-6boF-QPTi-Jfmc-f78s-tRL3-IBBacy', 'scsi-0QEMU_QEMU_HARDDISK_094c1351-6c25-40a9-b10a-7f3d6a96f205', 'scsi-SQEMU_QEMU_HARDDISK_094c1351-6c25-40a9-b10a-7f3d6a96f205'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f3b4a615--299b--50bf--af8e--26b6dc38e729-osd--block--f3b4a615--299b--50bf--af8e--26b6dc38e729'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-H3QuVZ-RPGB-y4GH-7c1v-8Blc-aucD-jXCBCR', 'scsi-0QEMU_QEMU_HARDDISK_494ee814-0dd9-4f0f-8082-b266e2c53997', 'scsi-SQEMU_QEMU_HARDDISK_494ee814-0dd9-4f0f-8082-b266e2c53997'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_086c202d-0ccf-4be9-aa6b-e4e971478b82', 'scsi-SQEMU_QEMU_HARDDISK_086c202d-0ccf-4be9-aa6b-e4e971478b82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718682 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.718693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--89c58721--f175--5d0e--8750--3436c1d71ced-osd--block--89c58721--f175--5d0e--8750--3436c1d71ced', 'dm-uuid-LVM-JnoSO34eYdGnWTsCakmewB44wX9WXEtdmovb6sRP6nxMebcoeelGfZ966qJm6W0U'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--989340a3--ac62--57b3--a342--92d58018bc1c-osd--block--989340a3--ac62--57b3--a342--92d58018bc1c', 'dm-uuid-LVM-kpj38q7QthTyMHyxijih2mNuaS0gaM14qrVuDa0sBq3IdYilV1K0Dtyrg5332VUh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5101bad7--da03--58be--8044--cbe4500fcec9-osd--block--5101bad7--da03--58be--8044--cbe4500fcec9'], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AfV1Ut-imLT-GJEx-qQZs-mDvO-OD8D-loiCNw', 'scsi-0QEMU_QEMU_HARDDISK_103f3392-831d-4ee6-b0f0-d6be015816d3', 'scsi-SQEMU_QEMU_HARDDISK_103f3392-831d-4ee6-b0f0-d6be015816d3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.718966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.718999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d59ecc87--3940--56cd--881a--fbc914ec02de-osd--block--d59ecc87--3940--56cd--881a--fbc914ec02de'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RICemi-Zbh1-pks0-7V8C-6Pf8-EuYC-pPo3u4', 'scsi-0QEMU_QEMU_HARDDISK_23794fae-2c08-458a-becf-a15050b8218b', 'scsi-SQEMU_QEMU_HARDDISK_23794fae-2c08-458a-becf-a15050b8218b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.719009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.719020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_492baa9f-f661-44dd-a3d2-70d79942748c', 'scsi-SQEMU_QEMU_HARDDISK_492baa9f-f661-44dd-a3d2-70d79942748c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.719030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.719114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.719132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.719143 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.719153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:02.719174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.719246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--89c58721--f175--5d0e--8750--3436c1d71ced-osd--block--89c58721--f175--5d0e--8750--3436c1d71ced'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u2fbYc-N0Pr-avQV-caWe-H1nP-U6AI-72y8iY', 'scsi-0QEMU_QEMU_HARDDISK_a31d8f91-c02a-4f65-9bd6-abd5e53b34f2', 'scsi-SQEMU_QEMU_HARDDISK_a31d8f91-c02a-4f65-9bd6-abd5e53b34f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.719387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--989340a3--ac62--57b3--a342--92d58018bc1c-osd--block--989340a3--ac62--57b3--a342--92d58018bc1c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2lFnkL-eTjr-jX59-b9Ca-Rsp5-OoNK-4ot2XJ', 'scsi-0QEMU_QEMU_HARDDISK_be832b54-23bf-4f17-8551-69f0e04b6625', 'scsi-SQEMU_QEMU_HARDDISK_be832b54-23bf-4f17-8551-69f0e04b6625'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.719403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_809e68db-7594-4e4e-90c0-4a7ae6eb5d4d', 'scsi-SQEMU_QEMU_HARDDISK_809e68db-7594-4e4e-90c0-4a7ae6eb5d4d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.719415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:02.719427 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.719438 | orchestrator | 2025-02-10 09:33:02.719450 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-10 09:33:02.719461 | orchestrator | Monday 10 February 2025 09:19:11 +0000 (0:00:03.722) 0:00:43.271 ******* 2025-02-10 09:33:02.719472 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.719483 | orchestrator | 2025-02-10 09:33:02.719493 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-10 09:33:02.719504 | orchestrator | Monday 10 February 2025 09:19:11 +0000 (0:00:00.323) 0:00:43.594 ******* 2025-02-10 09:33:02.719514 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.719525 | orchestrator | 2025-02-10 09:33:02.719535 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-02-10 09:33:02.719546 | orchestrator | Monday 10 February 2025 09:19:11 +0000 (0:00:00.182) 0:00:43.777 ******* 2025-02-10 09:33:02.719557 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.719567 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.719578 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.719588 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.719611 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.719622 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.719633 | orchestrator | 2025-02-10 09:33:02.719643 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-10 09:33:02.719654 | orchestrator | Monday 10 February 2025 09:19:13 +0000 
(0:00:01.219) 0:00:44.997 ******* 2025-02-10 09:33:02.719664 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.719675 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.719685 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.719695 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.719706 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.719716 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.719726 | orchestrator | 2025-02-10 09:33:02.719737 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-10 09:33:02.719748 | orchestrator | Monday 10 February 2025 09:19:14 +0000 (0:00:01.614) 0:00:46.612 ******* 2025-02-10 09:33:02.719758 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.719768 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.719779 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.719789 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.719799 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.719810 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.719820 | orchestrator | 2025-02-10 09:33:02.719830 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:33:02.719841 | orchestrator | Monday 10 February 2025 09:19:15 +0000 (0:00:01.155) 0:00:47.767 ******* 2025-02-10 09:33:02.719851 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.719862 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.719872 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.719885 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.719896 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.720047 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.720069 | orchestrator | 2025-02-10 09:33:02.720082 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:33:02.720094 | orchestrator | Monday 10 February 2025 09:19:16 +0000 (0:00:00.968) 0:00:48.735 ******* 2025-02-10 09:33:02.720106 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.720118 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.720130 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.720141 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.720153 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.720165 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.720176 | orchestrator | 2025-02-10 09:33:02.720188 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:33:02.720200 | orchestrator | Monday 10 February 2025 09:19:18 +0000 (0:00:01.248) 0:00:49.984 ******* 2025-02-10 09:33:02.720212 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.720223 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.720235 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.720245 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.720256 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.720266 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.720277 | orchestrator | 2025-02-10 09:33:02.720287 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:33:02.720303 | orchestrator | Monday 10 February 2025 09:19:19 +0000 (0:00:01.289) 0:00:51.273 ******* 2025-02-10 09:33:02.720314 | 
orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.720325 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.720335 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.720345 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.720355 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.720365 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.720376 | orchestrator | 2025-02-10 09:33:02.720386 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-10 09:33:02.720423 | orchestrator | Monday 10 February 2025 09:19:20 +0000 (0:00:01.421) 0:00:52.695 ******* 2025-02-10 09:33:02.720434 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.720449 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:33:02.720459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.720470 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:33:02.720480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:33:02.720490 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.720501 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:33:02.720511 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:33:02.720522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:02.720532 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.720542 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:33:02.720552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:33:02.720563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:02.720573 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:33:02.720583 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.720594 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:33:02.720604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:33:02.720615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:02.720625 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.720635 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:33:02.720646 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.720657 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:33:02.720667 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:33:02.720678 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.720688 | orchestrator | 2025-02-10 09:33:02.720698 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-10 09:33:02.720709 | orchestrator | Monday 10 February 2025 09:19:24 +0000 (0:00:03.217) 0:00:55.913 ******* 2025-02-10 09:33:02.720719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.720730 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:33:02.720740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.720751 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 
09:33:02.720761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:33:02.720772 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:33:02.720782 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:33:02.720792 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.720803 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:33:02.720813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:02.720823 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.720834 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:33:02.720844 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:33:02.720854 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.720865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:02.720875 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:33:02.720886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:02.720896 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:33:02.720912 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.720923 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:33:02.721014 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.721031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:33:02.721042 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:33:02.721052 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.721062 | orchestrator | 2025-02-10 09:33:02.721073 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-10 09:33:02.721083 | orchestrator | Monday 10 February 2025 09:19:27 +0000 (0:00:03.878) 0:00:59.792 ******* 2025-02-10 09:33:02.721094 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:33:02.721105 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-02-10 09:33:02.721116 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-02-10 09:33:02.721126 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:33:02.721137 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-02-10 09:33:02.721148 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-02-10 09:33:02.721158 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-02-10 09:33:02.721168 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-02-10 09:33:02.721179 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-02-10 09:33:02.721189 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:33:02.721200 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-02-10 09:33:02.721210 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-02-10 09:33:02.721220 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-02-10 09:33:02.721239 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-02-10 09:33:02.721250 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-02-10 09:33:02.721261 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-02-10 09:33:02.721271 | orchestrator | ok: 
[testbed-node-5] => (item=testbed-node-2) 2025-02-10 09:33:02.721281 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-02-10 09:33:02.721292 | orchestrator | 2025-02-10 09:33:02.721302 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-10 09:33:02.721313 | orchestrator | Monday 10 February 2025 09:19:35 +0000 (0:00:08.011) 0:01:07.804 ******* 2025-02-10 09:33:02.721323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.721334 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.721349 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:33:02.721360 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:33:02.721371 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:33:02.721381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:33:02.721392 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:33:02.721402 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:33:02.721412 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:33:02.721423 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.721438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:02.721449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:02.721460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:02.721470 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.721481 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:33:02.721492 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:33:02.721502 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:33:02.721513 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.721530 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.721541 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.721551 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:33:02.721630 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:33:02.721642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:33:02.721654 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.721666 | orchestrator | 2025-02-10 09:33:02.721678 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-02-10 09:33:02.721690 | orchestrator | Monday 10 February 2025 09:19:38 +0000 (0:00:02.220) 0:01:10.024 ******* 2025-02-10 09:33:02.721702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.721714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.721727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:33:02.721739 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.721750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:33:02.721762 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:33:02.721773 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 
09:33:02.721785 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:33:02.721796 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:33:02.721807 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.721819 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:33:02.721831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:02.721843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:02.721855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:02.721867 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.721878 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.721890 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:33:02.721998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:33:02.722038 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:33:02.722051 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.722061 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:33:02.722071 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:33:02.722081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:33:02.722091 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.722102 | orchestrator | 2025-02-10 09:33:02.722112 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-10 09:33:02.722122 | orchestrator | Monday 10 February 2025 09:19:39 +0000 (0:00:01.197) 0:01:11.222 ******* 2025-02-10 09:33:02.722133 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-02-10 09:33:02.722143 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:33:02.722154 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:33:02.722165 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:33:02.722175 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-02-10 09:33:02.722186 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:33:02.722196 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:33:02.722206 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:33:02.722225 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-02-10 09:33:02.722235 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:33:02.722245 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:33:02.722255 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:33:02.722265 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.722276 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:33:02.722286 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:33:02.722295 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:33:02.722305 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.722316 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:33:02.722326 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:33:02.722336 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:33:02.722346 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.722356 | orchestrator | 2025-02-10 09:33:02.722366 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-10 09:33:02.722376 | orchestrator | Monday 10 February 2025 09:19:40 +0000 (0:00:01.563) 0:01:12.786 ******* 2025-02-10 09:33:02.722386 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.722398 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.722416 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.722435 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.722454 | orchestrator | 2025-02-10 09:33:02.722470 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.722481 | orchestrator | Monday 10 February 2025 09:19:42 +0000 (0:00:01.792) 0:01:14.579 ******* 2025-02-10 09:33:02.722528 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.722540 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.722550 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.722560 | orchestrator | 2025-02-10 09:33:02.722571 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.722581 | orchestrator | Monday 10 February 2025 09:19:43 +0000 (0:00:00.740) 0:01:15.319 ******* 2025-02-10 09:33:02.722591 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.722601 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.722612 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.722623 | orchestrator | 2025-02-10 09:33:02.722633 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.722644 | orchestrator | Monday 10 February 2025 09:19:44 +0000 (0:00:00.924) 0:01:16.243 ******* 2025-02-10 09:33:02.722654 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.722686 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.722698 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.722708 | orchestrator | 2025-02-10 09:33:02.722718 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.722729 | orchestrator | Monday 10 February 2025 09:19:45 +0000 (0:00:00.822) 0:01:17.065 ******* 2025-02-10 09:33:02.722739 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.722749 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.722760 | orchestrator | ok: [testbed-node-5] 
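(For context on the ceph-facts tasks logged above: they derive a per-host RADOS Gateway bind address either from an explicitly configured address or, failing that, from a configured interface. The following is a minimal, hypothetical Ansible sketch of that general pattern, written for illustration only; it is not the actual ceph-ansible source, and the variable names radosgw_address and radosgw_interface are assumptions.)

    # Hypothetical illustration of the fact-derivation pattern seen in the log;
    # not the real ceph-ansible task. Variable names are assumed for the example.
    - name: set_fact _radosgw_address from an explicitly configured address
      ansible.builtin.set_fact:
        _radosgw_address: "{{ radosgw_address }}"
      when: radosgw_address is defined

    - name: set_fact _radosgw_address from a configured interface (IPv4 fallback)
      ansible.builtin.set_fact:
        _radosgw_address: "{{ ansible_facts[radosgw_interface | replace('-', '_')]['ipv4']['address'] }}"
      when:
        - radosgw_address is not defined
        - radosgw_interface is defined

(In this run only the first branch fires, which matches the "ok" results for testbed-node-3 through testbed-node-5 above and the skipped interface-based tasks below.)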
2025-02-10 09:33:02.722770 | orchestrator | 2025-02-10 09:33:02.722780 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.722875 | orchestrator | Monday 10 February 2025 09:19:46 +0000 (0:00:01.086) 0:01:18.152 ******* 2025-02-10 09:33:02.722891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.722902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.722920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.722963 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.722984 | orchestrator | 2025-02-10 09:33:02.723001 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.723017 | orchestrator | Monday 10 February 2025 09:19:47 +0000 (0:00:00.848) 0:01:19.001 ******* 2025-02-10 09:33:02.723032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.723043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.723053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.723063 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.723073 | orchestrator | 2025-02-10 09:33:02.723083 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:33:02.723094 | orchestrator | Monday 10 February 2025 09:19:48 +0000 (0:00:01.210) 0:01:20.211 ******* 2025-02-10 09:33:02.723103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.723119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.723129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.723139 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.723149 | orchestrator | 2025-02-10 09:33:02.723159 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.723169 | orchestrator | Monday 10 February 2025 09:19:49 +0000 (0:00:01.050) 0:01:21.261 ******* 2025-02-10 09:33:02.723179 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.723190 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.723200 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.723210 | orchestrator | 2025-02-10 09:33:02.723220 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.723230 | orchestrator | Monday 10 February 2025 09:19:50 +0000 (0:00:01.014) 0:01:22.276 ******* 2025-02-10 09:33:02.723240 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-10 09:33:02.723250 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-10 09:33:02.723260 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-10 09:33:02.723270 | orchestrator | 2025-02-10 09:33:02.723280 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.723290 | orchestrator | Monday 10 February 2025 09:19:51 +0000 (0:00:01.465) 0:01:23.741 ******* 2025-02-10 09:33:02.723300 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.723310 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.723320 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.723330 | orchestrator | 2025-02-10 09:33:02.723340 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.723351 | orchestrator | Monday 10 February 2025 09:19:52 +0000 (0:00:00.718) 0:01:24.460 ******* 2025-02-10 09:33:02.723361 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.723371 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.723382 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.723392 | orchestrator | 2025-02-10 09:33:02.723402 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.723412 | orchestrator | Monday 10 February 2025 09:19:53 +0000 (0:00:00.704) 0:01:25.164 ******* 2025-02-10 09:33:02.723422 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.723432 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.723442 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.723452 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.723463 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.723473 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.723491 | orchestrator | 2025-02-10 09:33:02.723503 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.723517 | orchestrator | Monday 10 February 2025 09:19:54 +0000 (0:00:00.944) 0:01:26.109 ******* 2025-02-10 09:33:02.723532 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.723546 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.723561 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.723575 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.723596 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.723611 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.723625 | orchestrator | 2025-02-10 09:33:02.723639 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.723652 | orchestrator | Monday 10 February 2025 09:19:55 +0000 (0:00:01.065) 0:01:27.175 ******* 2025-02-10 09:33:02.723664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.723677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.723689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.723702 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.723715 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:33:02.723727 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:33:02.723739 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:33:02.723752 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:33:02.723764 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:33:02.723777 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.723882 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:33:02.723901 | orchestrator | skipping: [testbed-node-5] 2025-02-10 
09:33:02.723914 | orchestrator | 2025-02-10 09:33:02.723927 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-10 09:33:02.723999 | orchestrator | Monday 10 February 2025 09:19:56 +0000 (0:00:00.881) 0:01:28.057 ******* 2025-02-10 09:33:02.724013 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.724025 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.724038 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.724050 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.724063 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.724075 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.724088 | orchestrator | 2025-02-10 09:33:02.724100 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-10 09:33:02.724113 | orchestrator | Monday 10 February 2025 09:19:57 +0000 (0:00:01.111) 0:01:29.168 ******* 2025-02-10 09:33:02.724125 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:33:02.724138 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:02.724151 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:02.724164 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-10 09:33:02.724176 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:33:02.724189 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:33:02.724202 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:33:02.724214 | orchestrator | 2025-02-10 09:33:02.724226 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-10 09:33:02.724247 | orchestrator | Monday 10 February 2025 09:19:58 +0000 (0:00:01.110) 0:01:30.278 ******* 2025-02-10 09:33:02.724260 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:33:02.724272 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:02.724285 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:02.724297 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-10 09:33:02.724311 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:33:02.724331 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:33:02.724351 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:33:02.724371 | orchestrator | 2025-02-10 09:33:02.724446 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:33:02.724461 | orchestrator | Monday 10 February 2025 09:20:00 +0000 (0:00:01.989) 0:01:32.268 ******* 2025-02-10 09:33:02.724474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.724488 | orchestrator | 2025-02-10 09:33:02.724501 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-02-10 09:33:02.724514 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:01.035) 0:01:33.304 ******* 2025-02-10 09:33:02.724526 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.724538 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.724550 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.724562 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.724573 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.724585 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.724598 | orchestrator | 2025-02-10 09:33:02.724610 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:33:02.724621 | orchestrator | Monday 10 February 2025 09:20:02 +0000 (0:00:00.703) 0:01:34.008 ******* 2025-02-10 09:33:02.724633 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.724646 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.724657 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.724669 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.724680 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.724691 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.724703 | orchestrator | 2025-02-10 09:33:02.724715 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:33:02.724726 | orchestrator | Monday 10 February 2025 09:20:03 +0000 (0:00:01.307) 0:01:35.315 ******* 2025-02-10 09:33:02.724738 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.724749 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.724761 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.724823 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.724835 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.724848 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.724860 | orchestrator | 2025-02-10 09:33:02.724872 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:33:02.724883 | orchestrator | Monday 10 February 2025 09:20:04 +0000 (0:00:00.982) 0:01:36.298 ******* 2025-02-10 09:33:02.724893 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.724903 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.724913 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.724923 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.724954 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.724965 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.724975 | orchestrator | 2025-02-10 09:33:02.724986 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:33:02.725005 | orchestrator | Monday 10 February 2025 09:20:05 +0000 (0:00:01.427) 0:01:37.725 ******* 2025-02-10 09:33:02.725015 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.725030 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.725124 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.725140 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.725151 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.725162 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.725173 | orchestrator | 2025-02-10 09:33:02.725184 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 
09:33:02.725195 | orchestrator | Monday 10 February 2025 09:20:06 +0000 (0:00:00.922) 0:01:38.648 ******* 2025-02-10 09:33:02.725206 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.725217 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.725228 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.725238 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.725249 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.725260 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.725270 | orchestrator | 2025-02-10 09:33:02.725281 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:33:02.725292 | orchestrator | Monday 10 February 2025 09:20:07 +0000 (0:00:01.045) 0:01:39.694 ******* 2025-02-10 09:33:02.725303 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.725314 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.725325 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.725335 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.725347 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.725358 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.725370 | orchestrator | 2025-02-10 09:33:02.725387 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:33:02.725400 | orchestrator | Monday 10 February 2025 09:20:08 +0000 (0:00:00.737) 0:01:40.431 ******* 2025-02-10 09:33:02.725410 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.725430 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.725440 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.725450 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.725460 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.725470 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.725480 | orchestrator | 2025-02-10 09:33:02.725491 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:33:02.725508 | orchestrator | Monday 10 February 2025 09:20:09 +0000 (0:00:00.784) 0:01:41.216 ******* 2025-02-10 09:33:02.725526 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.725541 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.725552 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.725562 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.725572 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.725582 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.725592 | orchestrator | 2025-02-10 09:33:02.725602 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:33:02.725612 | orchestrator | Monday 10 February 2025 09:20:10 +0000 (0:00:00.712) 0:01:41.928 ******* 2025-02-10 09:33:02.725622 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.725632 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.725642 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.725652 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.725663 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.725673 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.725683 | orchestrator | 2025-02-10 09:33:02.725693 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
2025-02-10 09:33:02.725704 | orchestrator | Monday 10 February 2025 09:20:11 +0000 (0:00:00.921) 0:01:42.850 ******* 2025-02-10 09:33:02.725713 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.725723 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.725742 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.725754 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.725767 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.725778 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.725790 | orchestrator | 2025-02-10 09:33:02.725802 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:33:02.725814 | orchestrator | Monday 10 February 2025 09:20:12 +0000 (0:00:01.049) 0:01:43.900 ******* 2025-02-10 09:33:02.725825 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.725837 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.725848 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.725859 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.725870 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.725883 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.725894 | orchestrator | 2025-02-10 09:33:02.725906 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:33:02.725918 | orchestrator | Monday 10 February 2025 09:20:12 +0000 (0:00:00.906) 0:01:44.807 ******* 2025-02-10 09:33:02.725981 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.725996 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.726009 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.726047 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.726061 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.726073 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.726084 | orchestrator | 2025-02-10 09:33:02.726094 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:33:02.726104 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:00.691) 0:01:45.498 ******* 2025-02-10 09:33:02.726113 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.726122 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.726130 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.726139 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.726147 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.726156 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.726165 | orchestrator | 2025-02-10 09:33:02.726173 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:33:02.726182 | orchestrator | Monday 10 February 2025 09:20:14 +0000 (0:00:00.731) 0:01:46.229 ******* 2025-02-10 09:33:02.726191 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.726199 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.726208 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.726216 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.726225 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.726234 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.726242 | orchestrator | 2025-02-10 09:33:02.726255 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:33:02.726331 | orchestrator | Monday 10 February 2025 09:20:15 +0000 
(0:00:00.754) 0:01:46.984 ******* 2025-02-10 09:33:02.726344 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.726353 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.726361 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.726370 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.726382 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.726391 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.726400 | orchestrator | 2025-02-10 09:33:02.726409 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:33:02.726418 | orchestrator | Monday 10 February 2025 09:20:16 +0000 (0:00:01.063) 0:01:48.048 ******* 2025-02-10 09:33:02.726427 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.726435 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.726444 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.726452 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.726460 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.726469 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.726484 | orchestrator | 2025-02-10 09:33:02.726493 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:33:02.726502 | orchestrator | Monday 10 February 2025 09:20:16 +0000 (0:00:00.624) 0:01:48.672 ******* 2025-02-10 09:33:02.726511 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.726520 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.726528 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.726536 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.726545 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.726554 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.726563 | orchestrator | 2025-02-10 09:33:02.726571 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:33:02.726580 | orchestrator | Monday 10 February 2025 09:20:17 +0000 (0:00:00.803) 0:01:49.475 ******* 2025-02-10 09:33:02.726589 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.726597 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.726606 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.726615 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.726623 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.726631 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.726640 | orchestrator | 2025-02-10 09:33:02.726648 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:33:02.726657 | orchestrator | Monday 10 February 2025 09:20:18 +0000 (0:00:00.614) 0:01:50.089 ******* 2025-02-10 09:33:02.726666 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.726674 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.726683 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.726691 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.726700 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.726709 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.726717 | orchestrator | 2025-02-10 09:33:02.726728 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:33:02.726742 | orchestrator | Monday 10 February 2025 09:20:19 +0000 (0:00:00.816) 0:01:50.906 ******* 2025-02-10 09:33:02.726751 | 
orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.726760 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.726768 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.726798 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.726810 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.726819 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.726827 | orchestrator | 2025-02-10 09:33:02.726836 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:33:02.726845 | orchestrator | Monday 10 February 2025 09:20:19 +0000 (0:00:00.761) 0:01:51.668 ******* 2025-02-10 09:33:02.726853 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.726862 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.726871 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.726880 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.726888 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.726897 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.726905 | orchestrator | 2025-02-10 09:33:02.726914 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:33:02.726923 | orchestrator | Monday 10 February 2025 09:20:20 +0000 (0:00:00.793) 0:01:52.462 ******* 2025-02-10 09:33:02.726947 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.726963 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.726978 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.726992 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727007 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727017 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727027 | orchestrator | 2025-02-10 09:33:02.727036 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:33:02.727046 | orchestrator | Monday 10 February 2025 09:20:21 +0000 (0:00:00.613) 0:01:53.075 ******* 2025-02-10 09:33:02.727061 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727072 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727082 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727091 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727101 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727111 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727121 | orchestrator | 2025-02-10 09:33:02.727130 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:33:02.727140 | orchestrator | Monday 10 February 2025 09:20:22 +0000 (0:00:00.798) 0:01:53.874 ******* 2025-02-10 09:33:02.727149 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727164 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727174 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727184 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727193 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727203 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727213 | orchestrator | 2025-02-10 09:33:02.727223 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:33:02.727233 | orchestrator | Monday 10 February 2025 09:20:22 +0000 (0:00:00.809) 0:01:54.683 ******* 2025-02-10 
09:33:02.727242 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727252 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727261 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727271 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727280 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727290 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727300 | orchestrator | 2025-02-10 09:33:02.727371 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:33:02.727384 | orchestrator | Monday 10 February 2025 09:20:24 +0000 (0:00:01.205) 0:01:55.889 ******* 2025-02-10 09:33:02.727392 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727401 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727410 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727418 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727427 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727436 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727444 | orchestrator | 2025-02-10 09:33:02.727453 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:33:02.727462 | orchestrator | Monday 10 February 2025 09:20:25 +0000 (0:00:01.072) 0:01:56.962 ******* 2025-02-10 09:33:02.727471 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727479 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727487 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727496 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727504 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727513 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727521 | orchestrator | 2025-02-10 09:33:02.727530 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:33:02.727539 | orchestrator | Monday 10 February 2025 09:20:26 +0000 (0:00:01.855) 0:01:58.817 ******* 2025-02-10 09:33:02.727548 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727556 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727565 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727573 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727582 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727590 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727599 | orchestrator | 2025-02-10 09:33:02.727608 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:33:02.727616 | orchestrator | Monday 10 February 2025 09:20:27 +0000 (0:00:00.989) 0:01:59.806 ******* 2025-02-10 09:33:02.727625 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727640 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727649 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727658 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727666 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727675 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727683 | orchestrator | 2025-02-10 09:33:02.727692 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:33:02.727700 
| orchestrator | Monday 10 February 2025 09:20:29 +0000 (0:00:01.163) 0:02:00.970 ******* 2025-02-10 09:33:02.727709 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727718 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727726 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727735 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727743 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727751 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727760 | orchestrator | 2025-02-10 09:33:02.727768 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:33:02.727777 | orchestrator | Monday 10 February 2025 09:20:29 +0000 (0:00:00.865) 0:02:01.835 ******* 2025-02-10 09:33:02.727786 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727794 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727803 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727811 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727820 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.727828 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.727837 | orchestrator | 2025-02-10 09:33:02.727845 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:33:02.727854 | orchestrator | Monday 10 February 2025 09:20:30 +0000 (0:00:00.945) 0:02:02.780 ******* 2025-02-10 09:33:02.727862 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.727871 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.727879 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.727888 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.727897 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.727905 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.727914 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.727923 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.727952 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.727962 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.727971 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.727979 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.727988 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.727997 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.728007 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.728023 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.728033 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.728044 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.728054 | orchestrator | 2025-02-10 09:33:02.728068 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:33:02.728079 | orchestrator | Monday 10 February 2025 09:20:31 +0000 (0:00:00.785) 0:02:03.566 ******* 2025-02-10 09:33:02.728089 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:33:02.728099 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:33:02.728109 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728119 | orchestrator | skipping: 
[testbed-node-1] => (item=osd memory target)  2025-02-10 09:33:02.728129 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:33:02.728139 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.728149 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:33:02.728164 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:33:02.728228 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.728241 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:33:02.728251 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:33:02.728261 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.728271 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:33:02.728281 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:33:02.728291 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.728300 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:33:02.728310 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:33:02.728320 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.728329 | orchestrator | 2025-02-10 09:33:02.728340 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:33:02.728349 | orchestrator | Monday 10 February 2025 09:20:32 +0000 (0:00:00.904) 0:02:04.471 ******* 2025-02-10 09:33:02.728359 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728369 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.728377 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.728386 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.728394 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.728403 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.728411 | orchestrator | 2025-02-10 09:33:02.728420 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:33:02.728429 | orchestrator | Monday 10 February 2025 09:20:33 +0000 (0:00:00.646) 0:02:05.117 ******* 2025-02-10 09:33:02.728438 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728447 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.728455 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.728464 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.728472 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.728481 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.728490 | orchestrator | 2025-02-10 09:33:02.728499 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.728508 | orchestrator | Monday 10 February 2025 09:20:34 +0000 (0:00:00.747) 0:02:05.865 ******* 2025-02-10 09:33:02.728517 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728525 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.728534 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.728542 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.728551 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.728559 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.728568 | orchestrator | 2025-02-10 
09:33:02.728577 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.728585 | orchestrator | Monday 10 February 2025 09:20:34 +0000 (0:00:00.563) 0:02:06.428 ******* 2025-02-10 09:33:02.728594 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728602 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.728611 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.728619 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.728628 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.728636 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.728645 | orchestrator | 2025-02-10 09:33:02.728653 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.728662 | orchestrator | Monday 10 February 2025 09:20:35 +0000 (0:00:00.770) 0:02:07.198 ******* 2025-02-10 09:33:02.728670 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728679 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.728687 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.728696 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.728710 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.728719 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.728727 | orchestrator | 2025-02-10 09:33:02.728736 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.728745 | orchestrator | Monday 10 February 2025 09:20:35 +0000 (0:00:00.606) 0:02:07.805 ******* 2025-02-10 09:33:02.728753 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728762 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.728774 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.728786 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.728794 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.728803 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.728811 | orchestrator | 2025-02-10 09:33:02.728820 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.728828 | orchestrator | Monday 10 February 2025 09:20:36 +0000 (0:00:00.754) 0:02:08.560 ******* 2025-02-10 09:33:02.728837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.728846 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.728855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.728863 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728871 | orchestrator | 2025-02-10 09:33:02.728880 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.728888 | orchestrator | Monday 10 February 2025 09:20:37 +0000 (0:00:00.501) 0:02:09.062 ******* 2025-02-10 09:33:02.728897 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.728906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.728914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.728923 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.728991 | orchestrator | 2025-02-10 09:33:02.729003 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 
2025-02-10 09:33:02.729012 | orchestrator | Monday 10 February 2025 09:20:37 +0000 (0:00:00.481) 0:02:09.543 ******* 2025-02-10 09:33:02.729020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.729029 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.729042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.729108 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.729121 | orchestrator | 2025-02-10 09:33:02.729131 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.729140 | orchestrator | Monday 10 February 2025 09:20:38 +0000 (0:00:00.443) 0:02:09.986 ******* 2025-02-10 09:33:02.729149 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.729158 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.729167 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.729176 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.729189 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.729198 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.729207 | orchestrator | 2025-02-10 09:33:02.729216 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.729224 | orchestrator | Monday 10 February 2025 09:20:39 +0000 (0:00:00.875) 0:02:10.861 ******* 2025-02-10 09:33:02.729233 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.729242 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.729251 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.729260 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.729268 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.729277 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.729286 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.729295 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.729310 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.729319 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.729328 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.729336 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.729344 | orchestrator | 2025-02-10 09:33:02.729352 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.729361 | orchestrator | Monday 10 February 2025 09:20:39 +0000 (0:00:00.896) 0:02:11.758 ******* 2025-02-10 09:33:02.729369 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.729377 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.729386 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.729394 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.729402 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.729411 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.729419 | orchestrator | 2025-02-10 09:33:02.729427 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.729436 | orchestrator | Monday 10 February 2025 09:20:40 +0000 (0:00:00.910) 0:02:12.668 ******* 2025-02-10 09:33:02.729444 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.729452 | orchestrator | skipping: [testbed-node-1] 
2025-02-10 09:33:02.729460 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.729468 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.729476 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.729485 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.729493 | orchestrator | 2025-02-10 09:33:02.729501 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.729509 | orchestrator | Monday 10 February 2025 09:20:41 +0000 (0:00:00.710) 0:02:13.379 ******* 2025-02-10 09:33:02.729517 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.729525 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.729534 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.729542 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.729550 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.729559 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.729567 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.729575 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.729583 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.729591 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.729600 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.729608 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.729616 | orchestrator | 2025-02-10 09:33:02.729625 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.729633 | orchestrator | Monday 10 February 2025 09:20:42 +0000 (0:00:01.327) 0:02:14.707 ******* 2025-02-10 09:33:02.729641 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.729650 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.729658 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.729666 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.729674 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.729683 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.729691 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.729699 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.729708 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.729716 | orchestrator | 2025-02-10 09:33:02.729731 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.729741 | orchestrator | Monday 10 February 2025 09:20:43 +0000 (0:00:00.947) 0:02:15.654 ******* 2025-02-10 09:33:02.729755 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.729764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.729814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.729823 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.729833 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:33:02.729842 | orchestrator | skipping: [testbed-node-1] 
=> (item=testbed-node-4)  2025-02-10 09:33:02.729851 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:33:02.729861 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.729870 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:33:02.729951 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:33:02.729965 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:33:02.729975 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.729984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.729994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.730008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.730037 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:33:02.730048 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:33:02.730057 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:33:02.730066 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.730075 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.730084 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:33:02.730094 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:33:02.730103 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:33:02.730110 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.730118 | orchestrator | 2025-02-10 09:33:02.730126 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:33:02.730134 | orchestrator | Monday 10 February 2025 09:20:45 +0000 (0:00:02.109) 0:02:17.764 ******* 2025-02-10 09:33:02.730142 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.730150 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.730158 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.730166 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.730174 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.730182 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.730189 | orchestrator | 2025-02-10 09:33:02.730198 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:33:02.730206 | orchestrator | Monday 10 February 2025 09:20:47 +0000 (0:00:02.025) 0:02:19.789 ******* 2025-02-10 09:33:02.730214 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.730221 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.730229 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.730237 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.730246 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.730254 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:33:02.730262 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.730270 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:33:02.730277 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.730285 | orchestrator | 2025-02-10 09:33:02.730293 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:33:02.730301 | orchestrator | Monday 10 
February 2025 09:20:49 +0000 (0:00:01.747) 0:02:21.536 ******* 2025-02-10 09:33:02.730309 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.730322 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.730330 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.730344 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.730353 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.730361 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.730369 | orchestrator | 2025-02-10 09:33:02.730376 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:33:02.730384 | orchestrator | Monday 10 February 2025 09:20:51 +0000 (0:00:01.594) 0:02:23.131 ******* 2025-02-10 09:33:02.730392 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.730400 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.730408 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.730416 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.730424 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.730448 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.730456 | orchestrator | 2025-02-10 09:33:02.730464 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-02-10 09:33:02.730472 | orchestrator | Monday 10 February 2025 09:20:52 +0000 (0:00:01.353) 0:02:24.484 ******* 2025-02-10 09:33:02.730480 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.730488 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.730496 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.730504 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.730511 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.730519 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.730527 | orchestrator | 2025-02-10 09:33:02.730535 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-02-10 09:33:02.730543 | orchestrator | Monday 10 February 2025 09:20:54 +0000 (0:00:01.759) 0:02:26.243 ******* 2025-02-10 09:33:02.730551 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.730559 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.730567 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.730574 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.730582 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.730590 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.730598 | orchestrator | 2025-02-10 09:33:02.730606 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-02-10 09:33:02.730614 | orchestrator | Monday 10 February 2025 09:20:56 +0000 (0:00:02.187) 0:02:28.431 ******* 2025-02-10 09:33:02.730622 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.730631 | orchestrator | 2025-02-10 09:33:02.730638 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-02-10 09:33:02.730646 | orchestrator | Monday 10 February 2025 09:20:58 +0000 (0:00:01.470) 0:02:29.902 ******* 2025-02-10 09:33:02.730654 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.730662 | orchestrator | skipping: [testbed-node-1] 2025-02-10 
09:33:02.730670 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.730678 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.730686 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.730693 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.730701 | orchestrator | 2025-02-10 09:33:02.730762 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-02-10 09:33:02.730774 | orchestrator | Monday 10 February 2025 09:20:59 +0000 (0:00:01.038) 0:02:30.940 ******* 2025-02-10 09:33:02.730782 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.730791 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.730799 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.730807 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.730815 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.730823 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.730832 | orchestrator | 2025-02-10 09:33:02.730840 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-02-10 09:33:02.730855 | orchestrator | Monday 10 February 2025 09:20:59 +0000 (0:00:00.803) 0:02:31.743 ******* 2025-02-10 09:33:02.730863 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:33:02.730872 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:33:02.730880 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:33:02.730888 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:33:02.730896 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:33:02.730904 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:33:02.730912 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:33:02.730921 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:33:02.730929 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:33:02.731081 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:33:02.731100 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:33:02.731108 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:33:02.731116 | orchestrator | 2025-02-10 09:33:02.731124 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-02-10 09:33:02.731132 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:01.982) 0:02:33.726 ******* 2025-02-10 09:33:02.731139 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.731148 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.731156 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.731164 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.731171 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.731179 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.731187 | orchestrator | 2025-02-10 09:33:02.731195 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-02-10 09:33:02.731203 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:01.226) 0:02:34.953 ******* 2025-02-10 09:33:02.731210 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.731218 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.731226 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.731234 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.731242 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.731249 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.731257 | orchestrator | 2025-02-10 09:33:02.731312 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-02-10 09:33:02.731321 | orchestrator | Monday 10 February 2025 09:21:04 +0000 (0:00:01.092) 0:02:36.045 ******* 2025-02-10 09:33:02.731329 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.731337 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.731344 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.731352 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.731364 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.731373 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.731380 | orchestrator | 2025-02-10 09:33:02.731389 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-02-10 09:33:02.731398 | orchestrator | Monday 10 February 2025 09:21:04 +0000 (0:00:00.711) 0:02:36.756 ******* 2025-02-10 09:33:02.731407 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.731417 | orchestrator | 2025-02-10 09:33:02.731426 | orchestrator | TASK [ceph-container-common : pulling nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 image] *** 2025-02-10 09:33:02.731444 | orchestrator | Monday 10 February 2025 09:21:06 +0000 (0:00:01.619) 0:02:38.376 ******* 2025-02-10 09:33:02.731454 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.731463 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.731472 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.731481 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.731490 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.731499 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.731507 | orchestrator | 2025-02-10 09:33:02.731515 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-02-10 09:33:02.731523 | orchestrator | Monday 10 February 2025 09:21:32 +0000 (0:00:25.955) 0:03:04.332 ******* 2025-02-10 09:33:02.731531 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:33:02.731538 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:33:02.731546 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:33:02.731554 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.731562 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:33:02.731570 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:33:02.731659 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4) 
 2025-02-10 09:33:02.731670 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.731677 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:33:02.731685 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:33:02.731693 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:33:02.731700 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.731708 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:33:02.731716 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:33:02.731724 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:33:02.731733 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.731740 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:33:02.731748 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:33:02.731756 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:33:02.731763 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.731770 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:33:02.731777 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:33:02.731784 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:33:02.731791 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.731798 | orchestrator | 2025-02-10 09:33:02.731805 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-02-10 09:33:02.731816 | orchestrator | Monday 10 February 2025 09:21:33 +0000 (0:00:00.847) 0:03:05.179 ******* 2025-02-10 09:33:02.731823 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.731830 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.731837 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.731844 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.731860 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.731867 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.731874 | orchestrator | 2025-02-10 09:33:02.731881 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-02-10 09:33:02.731888 | orchestrator | Monday 10 February 2025 09:21:34 +0000 (0:00:00.811) 0:03:05.990 ******* 2025-02-10 09:33:02.731895 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.731907 | orchestrator | 2025-02-10 09:33:02.731914 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-02-10 09:33:02.731921 | orchestrator | Monday 10 February 2025 09:21:34 +0000 (0:00:00.455) 0:03:06.446 ******* 2025-02-10 09:33:02.731927 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.731958 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.731968 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.731975 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.731982 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.731989 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.731995 | 
orchestrator | 2025-02-10 09:33:02.732003 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-02-10 09:33:02.732010 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:00.724) 0:03:07.170 ******* 2025-02-10 09:33:02.732017 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732024 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732031 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732037 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732044 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732051 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732058 | orchestrator | 2025-02-10 09:33:02.732065 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-02-10 09:33:02.732072 | orchestrator | Monday 10 February 2025 09:21:36 +0000 (0:00:01.015) 0:03:08.186 ******* 2025-02-10 09:33:02.732079 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732086 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732093 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732100 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732106 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732113 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732120 | orchestrator | 2025-02-10 09:33:02.732127 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-02-10 09:33:02.732134 | orchestrator | Monday 10 February 2025 09:21:37 +0000 (0:00:00.803) 0:03:08.990 ******* 2025-02-10 09:33:02.732141 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.732147 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.732154 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.732161 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.732168 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.732174 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.732181 | orchestrator | 2025-02-10 09:33:02.732188 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-02-10 09:33:02.732195 | orchestrator | Monday 10 February 2025 09:21:40 +0000 (0:00:03.331) 0:03:12.321 ******* 2025-02-10 09:33:02.732202 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.732208 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.732215 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.732222 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.732244 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.732252 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.732259 | orchestrator | 2025-02-10 09:33:02.732266 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-02-10 09:33:02.732273 | orchestrator | Monday 10 February 2025 09:21:41 +0000 (0:00:00.937) 0:03:13.258 ******* 2025-02-10 09:33:02.732280 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.732289 | orchestrator | 2025-02-10 09:33:02.732344 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-02-10 09:33:02.732355 | orchestrator | Monday 10 February 2025 09:21:43 +0000 (0:00:02.007) 0:03:15.265 ******* 2025-02-10 
09:33:02.732362 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732375 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732383 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732396 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732403 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732410 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732418 | orchestrator | 2025-02-10 09:33:02.732425 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-02-10 09:33:02.732433 | orchestrator | Monday 10 February 2025 09:21:44 +0000 (0:00:01.198) 0:03:16.464 ******* 2025-02-10 09:33:02.732440 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732447 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732454 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732462 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732469 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732476 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732484 | orchestrator | 2025-02-10 09:33:02.732491 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-02-10 09:33:02.732498 | orchestrator | Monday 10 February 2025 09:21:45 +0000 (0:00:00.663) 0:03:17.127 ******* 2025-02-10 09:33:02.732506 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732513 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732520 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732527 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732534 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732542 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732549 | orchestrator | 2025-02-10 09:33:02.732556 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-02-10 09:33:02.732563 | orchestrator | Monday 10 February 2025 09:21:46 +0000 (0:00:01.182) 0:03:18.309 ******* 2025-02-10 09:33:02.732570 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732578 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732585 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732592 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732599 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732607 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732614 | orchestrator | 2025-02-10 09:33:02.732621 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-02-10 09:33:02.732628 | orchestrator | Monday 10 February 2025 09:21:47 +0000 (0:00:00.945) 0:03:19.255 ******* 2025-02-10 09:33:02.732636 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732643 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732650 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732657 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732664 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732671 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732679 | orchestrator | 2025-02-10 09:33:02.732686 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-02-10 09:33:02.732693 | orchestrator | Monday 10 February 2025 09:21:48 +0000 (0:00:01.004) 0:03:20.260 ******* 
2025-02-10 09:33:02.732701 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732708 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732715 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732723 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732730 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732737 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732744 | orchestrator | 2025-02-10 09:33:02.732751 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-02-10 09:33:02.732759 | orchestrator | Monday 10 February 2025 09:21:49 +0000 (0:00:01.058) 0:03:21.318 ******* 2025-02-10 09:33:02.732766 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.732773 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.732780 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.732788 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.732795 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.732802 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.732814 | orchestrator | 2025-02-10 09:33:02.732821 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-02-10 09:33:02.732828 | orchestrator | Monday 10 February 2025 09:21:50 +0000 (0:00:01.485) 0:03:22.803 ******* 2025-02-10 09:33:02.732836 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.732843 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.732850 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.732857 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.732864 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.732871 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.732878 | orchestrator | 2025-02-10 09:33:02.732886 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:33:02.732896 | orchestrator | Monday 10 February 2025 09:21:52 +0000 (0:00:01.714) 0:03:24.518 ******* 2025-02-10 09:33:02.732904 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.732911 | orchestrator | 2025-02-10 09:33:02.732919 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-02-10 09:33:02.732926 | orchestrator | Monday 10 February 2025 09:21:54 +0000 (0:00:01.567) 0:03:26.086 ******* 2025-02-10 09:33:02.732952 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-02-10 09:33:02.732960 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-02-10 09:33:02.732967 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-02-10 09:33:02.732973 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-02-10 09:33:02.732982 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-02-10 09:33:02.732990 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-02-10 09:33:02.733001 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-02-10 09:33:02.733009 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-02-10 09:33:02.733061 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-02-10 09:33:02.733071 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-02-10 09:33:02.733078 
| orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-02-10 09:33:02.733085 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-02-10 09:33:02.733092 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-02-10 09:33:02.733099 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-02-10 09:33:02.733106 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-02-10 09:33:02.733157 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-02-10 09:33:02.733166 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-02-10 09:33:02.733173 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-02-10 09:33:02.733181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-02-10 09:33:02.733188 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-02-10 09:33:02.733194 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-02-10 09:33:02.733202 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-02-10 09:33:02.733209 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-02-10 09:33:02.733216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-02-10 09:33:02.733222 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-02-10 09:33:02.733230 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-02-10 09:33:02.733237 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-02-10 09:33:02.733244 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-02-10 09:33:02.733251 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-02-10 09:33:02.733263 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-02-10 09:33:02.733276 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:33:02.733283 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-02-10 09:33:02.733290 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-02-10 09:33:02.733296 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-02-10 09:33:02.733303 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-02-10 09:33:02.733310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:33:02.733317 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-02-10 09:33:02.733324 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-02-10 09:33:02.733331 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:33:02.733338 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:33:02.733344 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:33:02.733351 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:33:02.733358 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:33:02.733365 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:33:02.733372 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:33:02.733379 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 
2025-02-10 09:33:02.733386 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:33:02.733393 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:33:02.733400 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:33:02.733407 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:33:02.733413 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:33:02.733420 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:33:02.733427 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:33:02.733434 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:33:02.733441 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:33:02.733448 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:33:02.733455 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:33:02.733470 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:33:02.733477 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:33:02.733484 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:33:02.733491 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:33:02.733498 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:33:02.733505 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:33:02.733512 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:33:02.733519 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:33:02.733526 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:33:02.733532 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:33:02.733613 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:33:02.733625 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:33:02.733632 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:33:02.733644 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:33:02.733651 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:33:02.733658 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-02-10 09:33:02.733665 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:33:02.733672 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:33:02.733679 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-02-10 09:33:02.733686 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:33:02.733693 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:33:02.733700 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 
2025-02-10 09:33:02.733707 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:33:02.733714 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:33:02.733721 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-02-10 09:33:02.733728 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-02-10 09:33:02.733735 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-02-10 09:33:02.733741 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-02-10 09:33:02.733748 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-02-10 09:33:02.733755 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-02-10 09:33:02.733762 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-02-10 09:33:02.733769 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-02-10 09:33:02.733776 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-02-10 09:33:02.733783 | orchestrator | 2025-02-10 09:33:02.733790 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:33:02.733797 | orchestrator | Monday 10 February 2025 09:22:01 +0000 (0:00:07.187) 0:03:33.273 ******* 2025-02-10 09:33:02.733804 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.733811 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.733818 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.733829 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.733837 | orchestrator | 2025-02-10 09:33:02.733845 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-02-10 09:33:02.733853 | orchestrator | Monday 10 February 2025 09:22:03 +0000 (0:00:01.879) 0:03:35.153 ******* 2025-02-10 09:33:02.733860 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-10 09:33:02.733868 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-10 09:33:02.733876 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-10 09:33:02.733884 | orchestrator | 2025-02-10 09:33:02.733891 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-02-10 09:33:02.733899 | orchestrator | Monday 10 February 2025 09:22:04 +0000 (0:00:01.201) 0:03:36.355 ******* 2025-02-10 09:33:02.733906 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-10 09:33:02.733914 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-10 09:33:02.733922 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-10 09:33:02.733929 | orchestrator | 2025-02-10 09:33:02.733963 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:33:02.733970 | 
orchestrator | Monday 10 February 2025 09:22:06 +0000 (0:00:01.581) 0:03:37.936 ******* 2025-02-10 09:33:02.733977 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.733984 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.733991 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.733998 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.734005 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.734012 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.734039 | orchestrator | 2025-02-10 09:33:02.734046 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:33:02.734053 | orchestrator | Monday 10 February 2025 09:22:08 +0000 (0:00:02.020) 0:03:39.957 ******* 2025-02-10 09:33:02.734060 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734067 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734074 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734081 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.734087 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.734094 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.734101 | orchestrator | 2025-02-10 09:33:02.734108 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:33:02.734115 | orchestrator | Monday 10 February 2025 09:22:09 +0000 (0:00:01.007) 0:03:40.965 ******* 2025-02-10 09:33:02.734122 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734174 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734186 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734193 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.734201 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.734208 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.734215 | orchestrator | 2025-02-10 09:33:02.734223 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:33:02.734230 | orchestrator | Monday 10 February 2025 09:22:10 +0000 (0:00:01.572) 0:03:42.537 ******* 2025-02-10 09:33:02.734237 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734245 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734252 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734259 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.734266 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.734273 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.734280 | orchestrator | 2025-02-10 09:33:02.734288 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:33:02.734295 | orchestrator | Monday 10 February 2025 09:22:11 +0000 (0:00:00.893) 0:03:43.430 ******* 2025-02-10 09:33:02.734302 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734310 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734317 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734324 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.734331 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.734338 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.734345 | orchestrator | 2025-02-10 09:33:02.734352 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:33:02.734360 | 
orchestrator | Monday 10 February 2025 09:22:13 +0000 (0:00:01.606) 0:03:45.037 ******* 2025-02-10 09:33:02.734367 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734374 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734381 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734388 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.734395 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.734403 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.734410 | orchestrator | 2025-02-10 09:33:02.734417 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:33:02.734424 | orchestrator | Monday 10 February 2025 09:22:14 +0000 (0:00:01.056) 0:03:46.093 ******* 2025-02-10 09:33:02.734437 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734444 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734452 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734459 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.734466 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.734473 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.734480 | orchestrator | 2025-02-10 09:33:02.734488 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:33:02.734495 | orchestrator | Monday 10 February 2025 09:22:15 +0000 (0:00:01.280) 0:03:47.374 ******* 2025-02-10 09:33:02.734506 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734514 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734521 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734528 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.734535 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.734542 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.734550 | orchestrator | 2025-02-10 09:33:02.734557 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:33:02.734564 | orchestrator | Monday 10 February 2025 09:22:16 +0000 (0:00:01.096) 0:03:48.470 ******* 2025-02-10 09:33:02.734571 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734578 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734586 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734596 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.734604 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.734611 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.734618 | orchestrator | 2025-02-10 09:33:02.734626 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:33:02.734633 | orchestrator | Monday 10 February 2025 09:22:19 +0000 (0:00:02.778) 0:03:51.249 ******* 2025-02-10 09:33:02.734640 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734647 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734654 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734662 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.734669 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.734676 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.734683 | orchestrator | 2025-02-10 09:33:02.734710 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from 
ceph_conf_overrides] *** 2025-02-10 09:33:02.734718 | orchestrator | Monday 10 February 2025 09:22:20 +0000 (0:00:01.227) 0:03:52.476 ******* 2025-02-10 09:33:02.734726 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.734733 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.734740 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.734748 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.734755 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.734762 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.734769 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.734777 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.734784 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.734791 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.734798 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.734805 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.734814 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.734822 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.734830 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.734838 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.734846 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.734854 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.734862 | orchestrator | 2025-02-10 09:33:02.734870 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:33:02.734926 | orchestrator | Monday 10 February 2025 09:22:21 +0000 (0:00:00.783) 0:03:53.259 ******* 2025-02-10 09:33:02.734983 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:33:02.734992 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:33:02.735000 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735008 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 09:33:02.735016 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:33:02.735023 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735032 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:33:02.735040 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:33:02.735048 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735055 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-02-10 09:33:02.735063 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-02-10 09:33:02.735071 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-02-10 09:33:02.735079 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-02-10 09:33:02.735087 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-02-10 09:33:02.735095 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-02-10 09:33:02.735103 | orchestrator | 2025-02-10 09:33:02.735111 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:33:02.735119 | orchestrator | Monday 10 February 2025 09:22:22 +0000 (0:00:01.308) 0:03:54.568 ******* 2025-02-10 09:33:02.735127 | orchestrator | 
skipping: [testbed-node-0] 2025-02-10 09:33:02.735134 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735143 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735151 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.735159 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.735167 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.735174 | orchestrator | 2025-02-10 09:33:02.735181 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:33:02.735187 | orchestrator | Monday 10 February 2025 09:22:23 +0000 (0:00:00.949) 0:03:55.518 ******* 2025-02-10 09:33:02.735194 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735201 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735208 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735215 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.735222 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.735229 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.735236 | orchestrator | 2025-02-10 09:33:02.735243 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.735250 | orchestrator | Monday 10 February 2025 09:22:24 +0000 (0:00:01.046) 0:03:56.564 ******* 2025-02-10 09:33:02.735256 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735263 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735275 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735283 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.735289 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.735296 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.735303 | orchestrator | 2025-02-10 09:33:02.735310 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.735317 | orchestrator | Monday 10 February 2025 09:22:25 +0000 (0:00:00.930) 0:03:57.495 ******* 2025-02-10 09:33:02.735324 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735331 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735338 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735344 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.735351 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.735358 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.735370 | orchestrator | 2025-02-10 09:33:02.735377 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.735384 | orchestrator | Monday 10 February 2025 09:22:26 +0000 (0:00:00.953) 0:03:58.449 ******* 2025-02-10 09:33:02.735391 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735398 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735405 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735411 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.735418 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.735425 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.735432 | orchestrator | 2025-02-10 09:33:02.735439 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.735446 | orchestrator | Monday 10 February 2025 09:22:27 +0000 (0:00:00.860) 0:03:59.309 ******* 
2025-02-10 09:33:02.735453 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735459 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735466 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735479 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.735486 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.735493 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.735500 | orchestrator | 2025-02-10 09:33:02.735507 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.735514 | orchestrator | Monday 10 February 2025 09:22:28 +0000 (0:00:01.191) 0:04:00.500 ******* 2025-02-10 09:33:02.735521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.735527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.735534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.735551 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735558 | orchestrator | 2025-02-10 09:33:02.735565 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.735572 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:00.485) 0:04:00.985 ******* 2025-02-10 09:33:02.735579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.735586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.735593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.735600 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735608 | orchestrator | 2025-02-10 09:33:02.735660 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:33:02.735670 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:00.772) 0:04:01.758 ******* 2025-02-10 09:33:02.735677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.735684 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.735691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.735697 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735705 | orchestrator | 2025-02-10 09:33:02.735712 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.735719 | orchestrator | Monday 10 February 2025 09:22:30 +0000 (0:00:00.531) 0:04:02.289 ******* 2025-02-10 09:33:02.735725 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735733 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735739 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735746 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.735753 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.735760 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.735767 | orchestrator | 2025-02-10 09:33:02.735774 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.735781 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:00.624) 0:04:02.914 ******* 2025-02-10 09:33:02.735788 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.735795 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735808 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-02-10 09:33:02.735816 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735823 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.735829 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735836 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-10 09:33:02.735843 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-10 09:33:02.735850 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-10 09:33:02.735857 | orchestrator | 2025-02-10 09:33:02.735864 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.735871 | orchestrator | Monday 10 February 2025 09:22:32 +0000 (0:00:01.516) 0:04:04.430 ******* 2025-02-10 09:33:02.735878 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735885 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735892 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735899 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.735905 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.735912 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.735919 | orchestrator | 2025-02-10 09:33:02.735926 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.735948 | orchestrator | Monday 10 February 2025 09:22:33 +0000 (0:00:00.719) 0:04:05.150 ******* 2025-02-10 09:33:02.735961 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.735973 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.735983 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.735995 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.736003 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.736009 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.736017 | orchestrator | 2025-02-10 09:33:02.736025 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.736033 | orchestrator | Monday 10 February 2025 09:22:34 +0000 (0:00:01.134) 0:04:06.285 ******* 2025-02-10 09:33:02.736041 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.736049 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.736057 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.736066 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.736073 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.736081 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.736089 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.736097 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.736110 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.736118 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.736126 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.736148 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.736156 | orchestrator | 2025-02-10 09:33:02.736164 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.736176 | orchestrator | Monday 10 February 2025 09:22:35 +0000 (0:00:01.412) 0:04:07.697 ******* 2025-02-10 09:33:02.736184 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.736192 | orchestrator | skipping: [testbed-node-1] 
2025-02-10 09:33:02.736200 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.736208 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.736216 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.736224 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.736232 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.736240 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.736254 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.736262 | orchestrator | 2025-02-10 09:33:02.736270 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.736278 | orchestrator | Monday 10 February 2025 09:22:37 +0000 (0:00:01.284) 0:04:08.982 ******* 2025-02-10 09:33:02.736285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.736294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.736303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.736312 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.736322 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:33:02.736384 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:33:02.736396 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:33:02.736405 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.736424 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:33:02.736434 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:33:02.736443 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:33:02.736452 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.736461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.736470 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:33:02.736479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.736495 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:33:02.736505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.736513 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.736522 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:33:02.736535 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:33:02.736545 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.736554 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:33:02.736563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:33:02.736572 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.736581 | orchestrator | 2025-02-10 09:33:02.736590 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:33:02.736598 | orchestrator | Monday 10 February 2025 09:22:39 +0000 (0:00:02.110) 0:04:11.093 ******* 
2025-02-10 09:33:02.736607 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.736616 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.736625 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.736634 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.736648 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.736657 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.736665 | orchestrator | 2025-02-10 09:33:02.736674 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:33:02.736682 | orchestrator | Monday 10 February 2025 09:22:46 +0000 (0:00:07.502) 0:04:18.596 ******* 2025-02-10 09:33:02.736690 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.736698 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.736706 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.736714 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.736722 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.736730 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.736738 | orchestrator | 2025-02-10 09:33:02.736746 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-02-10 09:33:02.736754 | orchestrator | Monday 10 February 2025 09:22:48 +0000 (0:00:01.957) 0:04:20.553 ******* 2025-02-10 09:33:02.736762 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.736770 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.736777 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.736791 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.736799 | orchestrator | 2025-02-10 09:33:02.736808 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-02-10 09:33:02.736816 | orchestrator | Monday 10 February 2025 09:22:50 +0000 (0:00:01.545) 0:04:22.099 ******* 2025-02-10 09:33:02.736824 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.736832 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.736840 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.736847 | orchestrator | 2025-02-10 09:33:02.736856 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-02-10 09:33:02.736864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.736872 | orchestrator | 2025-02-10 09:33:02.736880 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-02-10 09:33:02.736888 | orchestrator | Monday 10 February 2025 09:22:51 +0000 (0:00:01.311) 0:04:23.411 ******* 2025-02-10 09:33:02.736896 | orchestrator | 2025-02-10 09:33:02.736904 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-02-10 09:33:02.736912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.736920 | orchestrator | 2025-02-10 09:33:02.736928 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-02-10 09:33:02.736952 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.736960 | orchestrator | 2025-02-10 09:33:02.736968 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 
2025-02-10 09:33:02.736976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.736984 | orchestrator | 2025-02-10 09:33:02.736992 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-02-10 09:33:02.737000 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.737008 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.737016 | orchestrator | 2025-02-10 09:33:02.737024 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-02-10 09:33:02.737032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.737040 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737048 | orchestrator | 2025-02-10 09:33:02.737056 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-02-10 09:33:02.737064 | orchestrator | Monday 10 February 2025 09:22:53 +0000 (0:00:01.482) 0:04:24.893 ******* 2025-02-10 09:33:02.737072 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.737080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.737088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:33:02.737096 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.737104 | orchestrator | 2025-02-10 09:33:02.737112 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-02-10 09:33:02.737168 | orchestrator | Monday 10 February 2025 09:22:53 +0000 (0:00:00.747) 0:04:25.641 ******* 2025-02-10 09:33:02.737180 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.737189 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.737198 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.737206 | orchestrator | 2025-02-10 09:33:02.737214 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-02-10 09:33:02.737223 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737231 | orchestrator | 2025-02-10 09:33:02.737239 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-02-10 09:33:02.737247 | orchestrator | Monday 10 February 2025 09:22:54 +0000 (0:00:00.524) 0:04:26.165 ******* 2025-02-10 09:33:02.737256 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.737264 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.737272 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.737286 | orchestrator | 2025-02-10 09:33:02.737295 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-02-10 09:33:02.737303 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737311 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.737320 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.737328 | orchestrator | 2025-02-10 09:33:02.737336 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-10 09:33:02.737344 | orchestrator | Monday 10 February 2025 09:22:55 +0000 (0:00:00.931) 0:04:27.097 ******* 2025-02-10 09:33:02.737353 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.737361 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.737369 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.737377 | orchestrator | 2025-02-10 
09:33:02.737385 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-02-10 09:33:02.737394 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737402 | orchestrator | 2025-02-10 09:33:02.737410 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-10 09:33:02.737419 | orchestrator | Monday 10 February 2025 09:22:55 +0000 (0:00:00.577) 0:04:27.674 ******* 2025-02-10 09:33:02.737427 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.737435 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.737444 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.737452 | orchestrator | 2025-02-10 09:33:02.737464 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-02-10 09:33:02.737473 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737481 | orchestrator | 2025-02-10 09:33:02.737489 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-02-10 09:33:02.737498 | orchestrator | Monday 10 February 2025 09:22:56 +0000 (0:00:00.770) 0:04:28.445 ******* 2025-02-10 09:33:02.737506 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737514 | orchestrator | 2025-02-10 09:33:02.737522 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-02-10 09:33:02.737531 | orchestrator | Monday 10 February 2025 09:22:56 +0000 (0:00:00.306) 0:04:28.751 ******* 2025-02-10 09:33:02.737539 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.737547 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.737555 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.737563 | orchestrator | 2025-02-10 09:33:02.737572 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-02-10 09:33:02.737580 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737588 | orchestrator | 2025-02-10 09:33:02.737597 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-10 09:33:02.737605 | orchestrator | Monday 10 February 2025 09:22:57 +0000 (0:00:00.623) 0:04:29.374 ******* 2025-02-10 09:33:02.737613 | orchestrator | 2025-02-10 09:33:02.737621 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-02-10 09:33:02.737630 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737638 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.737647 | orchestrator | 2025-02-10 09:33:02.737655 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-02-10 09:33:02.737663 | orchestrator | Monday 10 February 2025 09:22:58 +0000 (0:00:01.077) 0:04:30.452 ******* 2025-02-10 09:33:02.737671 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.737680 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.737688 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.737696 | orchestrator | 2025-02-10 09:33:02.737705 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-02-10 09:33:02.737713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.737721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
2025-02-10 09:33:02.737730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.737743 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737751 | orchestrator | 2025-02-10 09:33:02.737759 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-10 09:33:02.737767 | orchestrator | Monday 10 February 2025 09:22:59 +0000 (0:00:00.962) 0:04:31.415 ******* 2025-02-10 09:33:02.737776 | orchestrator | 2025-02-10 09:33:02.737784 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-02-10 09:33:02.737792 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.737801 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.737822 | orchestrator | 2025-02-10 09:33:02.737831 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-10 09:33:02.737841 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.737855 | orchestrator | 2025-02-10 09:33:02.737865 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-02-10 09:33:02.737874 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.737884 | orchestrator | 2025-02-10 09:33:02.737893 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-10 09:33:02.737902 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.737911 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.737921 | orchestrator | 2025-02-10 09:33:02.737971 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-02-10 09:33:02.737982 | orchestrator | Monday 10 February 2025 09:23:01 +0000 (0:00:01.756) 0:04:33.171 ******* 2025-02-10 09:33:02.737991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.738093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.738108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:33:02.738117 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.738125 | orchestrator | 2025-02-10 09:33:02.738134 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-02-10 09:33:02.738143 | orchestrator | Monday 10 February 2025 09:23:02 +0000 (0:00:01.583) 0:04:34.754 ******* 2025-02-10 09:33:02.738151 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.738160 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.738169 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.738177 | orchestrator | 2025-02-10 09:33:02.738186 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-02-10 09:33:02.738194 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.738203 | orchestrator | 2025-02-10 09:33:02.738212 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-10 09:33:02.738220 | orchestrator | Monday 10 February 2025 09:23:04 +0000 (0:00:01.099) 0:04:35.854 ******* 2025-02-10 09:33:02.738229 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.738238 | orchestrator | 2025-02-10 09:33:02.738246 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-02-10 
09:33:02.738255 | orchestrator | Monday 10 February 2025 09:23:04 +0000 (0:00:00.888) 0:04:36.742 ******* 2025-02-10 09:33:02.738263 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.738272 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.738280 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.738289 | orchestrator | 2025-02-10 09:33:02.738297 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-02-10 09:33:02.738306 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.738314 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.738323 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.738331 | orchestrator | 2025-02-10 09:33:02.738340 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-02-10 09:33:02.738349 | orchestrator | Monday 10 February 2025 09:23:06 +0000 (0:00:01.140) 0:04:37.883 ******* 2025-02-10 09:33:02.738357 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.738366 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.738374 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.738389 | orchestrator | 2025-02-10 09:33:02.738398 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:33:02.738407 | orchestrator | Monday 10 February 2025 09:23:07 +0000 (0:00:01.694) 0:04:39.577 ******* 2025-02-10 09:33:02.738415 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.738424 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.738432 | orchestrator | 2025-02-10 09:33:02.738445 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-02-10 09:33:02.738454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.738463 | orchestrator | 2025-02-10 09:33:02.738471 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:33:02.738480 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.738488 | orchestrator | 2025-02-10 09:33:02.738497 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-02-10 09:33:02.738505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.738514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.738522 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.738531 | orchestrator | 2025-02-10 09:33:02.738540 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-02-10 09:33:02.738548 | orchestrator | Monday 10 February 2025 09:23:09 +0000 (0:00:01.583) 0:04:41.161 ******* 2025-02-10 09:33:02.738557 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.738565 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.738573 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.738582 | orchestrator | 2025-02-10 09:33:02.738590 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-10 09:33:02.738598 | orchestrator | Monday 10 February 2025 09:23:10 +0000 (0:00:01.251) 0:04:42.413 ******* 2025-02-10 09:33:02.738607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.738615 | orchestrator | 2025-02-10 09:33:02.738634 | orchestrator | 
RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-02-10 09:33:02.738642 | orchestrator | Monday 10 February 2025 09:23:11 +0000 (0:00:01.070) 0:04:43.484 ******* 2025-02-10 09:33:02.738650 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.738658 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.738666 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.738674 | orchestrator | 2025-02-10 09:33:02.738682 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-02-10 09:33:02.738690 | orchestrator | Monday 10 February 2025 09:23:12 +0000 (0:00:00.452) 0:04:43.936 ******* 2025-02-10 09:33:02.738697 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.738705 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.738713 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.738721 | orchestrator | 2025-02-10 09:33:02.738739 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-02-10 09:33:02.738755 | orchestrator | Monday 10 February 2025 09:23:13 +0000 (0:00:01.677) 0:04:45.614 ******* 2025-02-10 09:33:02.738764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.738773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.738782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.738791 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.738800 | orchestrator | 2025-02-10 09:33:02.738809 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-02-10 09:33:02.738818 | orchestrator | Monday 10 February 2025 09:23:15 +0000 (0:00:01.351) 0:04:46.965 ******* 2025-02-10 09:33:02.738827 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.738837 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.738846 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.738855 | orchestrator | 2025-02-10 09:33:02.738863 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-02-10 09:33:02.738953 | orchestrator | Monday 10 February 2025 09:23:15 +0000 (0:00:00.838) 0:04:47.804 ******* 2025-02-10 09:33:02.738967 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.738977 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.738986 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.738995 | orchestrator | 2025-02-10 09:33:02.739004 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-10 09:33:02.739014 | orchestrator | Monday 10 February 2025 09:23:16 +0000 (0:00:00.395) 0:04:48.199 ******* 2025-02-10 09:33:02.739023 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.739031 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.739040 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.739049 | orchestrator | 2025-02-10 09:33:02.739058 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-02-10 09:33:02.739067 | orchestrator | Monday 10 February 2025 09:23:16 +0000 (0:00:00.420) 0:04:48.620 ******* 2025-02-10 09:33:02.739076 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.739085 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.739095 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.739104 | 
orchestrator | 2025-02-10 09:33:02.739112 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:33:02.739120 | orchestrator | Monday 10 February 2025 09:23:17 +0000 (0:00:00.407) 0:04:49.028 ******* 2025-02-10 09:33:02.739128 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.739136 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.739144 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.739151 | orchestrator | 2025-02-10 09:33:02.739159 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-02-10 09:33:02.739167 | orchestrator | 2025-02-10 09:33:02.739175 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:33:02.739183 | orchestrator | Monday 10 February 2025 09:23:20 +0000 (0:00:03.083) 0:04:52.111 ******* 2025-02-10 09:33:02.739191 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.739199 | orchestrator | 2025-02-10 09:33:02.739207 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:33:02.739215 | orchestrator | Monday 10 February 2025 09:23:21 +0000 (0:00:00.997) 0:04:53.109 ******* 2025-02-10 09:33:02.739223 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.739230 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.739238 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.739246 | orchestrator | 2025-02-10 09:33:02.739254 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:33:02.739262 | orchestrator | Monday 10 February 2025 09:23:22 +0000 (0:00:01.520) 0:04:54.630 ******* 2025-02-10 09:33:02.739270 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739278 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739294 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739302 | orchestrator | 2025-02-10 09:33:02.739310 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:33:02.739321 | orchestrator | Monday 10 February 2025 09:23:23 +0000 (0:00:00.480) 0:04:55.111 ******* 2025-02-10 09:33:02.739329 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739337 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739344 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739352 | orchestrator | 2025-02-10 09:33:02.739360 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:33:02.739368 | orchestrator | Monday 10 February 2025 09:23:23 +0000 (0:00:00.490) 0:04:55.601 ******* 2025-02-10 09:33:02.739376 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739384 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739392 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739400 | orchestrator | 2025-02-10 09:33:02.739408 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:33:02.739421 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:00.480) 0:04:56.082 ******* 2025-02-10 09:33:02.739429 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.739437 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.739445 | orchestrator | ok: [testbed-node-2] 
2025-02-10 09:33:02.739453 | orchestrator | 2025-02-10 09:33:02.739461 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:33:02.739469 | orchestrator | Monday 10 February 2025 09:23:25 +0000 (0:00:01.251) 0:04:57.333 ******* 2025-02-10 09:33:02.739477 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739498 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739507 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739515 | orchestrator | 2025-02-10 09:33:02.739523 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:33:02.739531 | orchestrator | Monday 10 February 2025 09:23:25 +0000 (0:00:00.445) 0:04:57.779 ******* 2025-02-10 09:33:02.739539 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739547 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739555 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739563 | orchestrator | 2025-02-10 09:33:02.739570 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:33:02.739578 | orchestrator | Monday 10 February 2025 09:23:26 +0000 (0:00:00.558) 0:04:58.337 ******* 2025-02-10 09:33:02.739586 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739594 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739601 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739609 | orchestrator | 2025-02-10 09:33:02.739617 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:33:02.739625 | orchestrator | Monday 10 February 2025 09:23:27 +0000 (0:00:00.503) 0:04:58.840 ******* 2025-02-10 09:33:02.739633 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739641 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739649 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739657 | orchestrator | 2025-02-10 09:33:02.739665 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:33:02.739673 | orchestrator | Monday 10 February 2025 09:23:27 +0000 (0:00:00.931) 0:04:59.772 ******* 2025-02-10 09:33:02.739681 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739688 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739745 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739757 | orchestrator | 2025-02-10 09:33:02.739766 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:33:02.739774 | orchestrator | Monday 10 February 2025 09:23:28 +0000 (0:00:00.467) 0:05:00.239 ******* 2025-02-10 09:33:02.739782 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.739791 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.739799 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.739807 | orchestrator | 2025-02-10 09:33:02.739816 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:33:02.739824 | orchestrator | Monday 10 February 2025 09:23:29 +0000 (0:00:01.061) 0:05:01.301 ******* 2025-02-10 09:33:02.739832 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739840 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739849 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739857 | orchestrator | 2025-02-10 09:33:02.739866 | 
orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:33:02.739874 | orchestrator | Monday 10 February 2025 09:23:30 +0000 (0:00:00.697) 0:05:01.998 ******* 2025-02-10 09:33:02.739882 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.739890 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.739898 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.739906 | orchestrator | 2025-02-10 09:33:02.739915 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:33:02.739929 | orchestrator | Monday 10 February 2025 09:23:31 +0000 (0:00:01.011) 0:05:03.010 ******* 2025-02-10 09:33:02.739958 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.739972 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.739985 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.739995 | orchestrator | 2025-02-10 09:33:02.740003 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:33:02.740011 | orchestrator | Monday 10 February 2025 09:23:31 +0000 (0:00:00.510) 0:05:03.521 ******* 2025-02-10 09:33:02.740019 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740027 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740034 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740042 | orchestrator | 2025-02-10 09:33:02.740050 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:33:02.740058 | orchestrator | Monday 10 February 2025 09:23:32 +0000 (0:00:00.507) 0:05:04.028 ******* 2025-02-10 09:33:02.740066 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740074 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740082 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740090 | orchestrator | 2025-02-10 09:33:02.740098 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:33:02.740106 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:00.931) 0:05:04.960 ******* 2025-02-10 09:33:02.740114 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740122 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740134 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740142 | orchestrator | 2025-02-10 09:33:02.740150 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:33:02.740158 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:00.419) 0:05:05.379 ******* 2025-02-10 09:33:02.740166 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740174 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740182 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740190 | orchestrator | 2025-02-10 09:33:02.740198 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:33:02.740219 | orchestrator | Monday 10 February 2025 09:23:34 +0000 (0:00:00.465) 0:05:05.845 ******* 2025-02-10 09:33:02.740228 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.740236 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.740248 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.740256 | orchestrator | 2025-02-10 09:33:02.740264 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 
2025-02-10 09:33:02.740272 | orchestrator | Monday 10 February 2025 09:23:34 +0000 (0:00:00.468) 0:05:06.313 ******* 2025-02-10 09:33:02.740280 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.740288 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.740296 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.740304 | orchestrator | 2025-02-10 09:33:02.740312 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:33:02.740320 | orchestrator | Monday 10 February 2025 09:23:35 +0000 (0:00:00.755) 0:05:07.069 ******* 2025-02-10 09:33:02.740328 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740336 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740344 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740351 | orchestrator | 2025-02-10 09:33:02.740360 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:33:02.740368 | orchestrator | Monday 10 February 2025 09:23:35 +0000 (0:00:00.523) 0:05:07.592 ******* 2025-02-10 09:33:02.740376 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740385 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740394 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740403 | orchestrator | 2025-02-10 09:33:02.740412 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:33:02.740421 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:00.416) 0:05:08.009 ******* 2025-02-10 09:33:02.740436 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740445 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740454 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740464 | orchestrator | 2025-02-10 09:33:02.740473 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:33:02.740482 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:00.453) 0:05:08.463 ******* 2025-02-10 09:33:02.740491 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740500 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740510 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740519 | orchestrator | 2025-02-10 09:33:02.740528 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:33:02.740537 | orchestrator | Monday 10 February 2025 09:23:37 +0000 (0:00:00.764) 0:05:09.227 ******* 2025-02-10 09:33:02.740546 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740555 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740614 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740627 | orchestrator | 2025-02-10 09:33:02.740636 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:33:02.740646 | orchestrator | Monday 10 February 2025 09:23:37 +0000 (0:00:00.442) 0:05:09.669 ******* 2025-02-10 09:33:02.740655 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740664 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740673 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740683 | orchestrator | 2025-02-10 09:33:02.740692 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:33:02.740702 | orchestrator | Monday 10 February 2025 09:23:38 
+0000 (0:00:00.649) 0:05:10.318 ******* 2025-02-10 09:33:02.740711 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740720 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740730 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740738 | orchestrator | 2025-02-10 09:33:02.740747 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:33:02.740755 | orchestrator | Monday 10 February 2025 09:23:39 +0000 (0:00:00.586) 0:05:10.904 ******* 2025-02-10 09:33:02.740764 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740772 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740780 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740789 | orchestrator | 2025-02-10 09:33:02.740797 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:33:02.740805 | orchestrator | Monday 10 February 2025 09:23:39 +0000 (0:00:00.796) 0:05:11.701 ******* 2025-02-10 09:33:02.740814 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740822 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740830 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740839 | orchestrator | 2025-02-10 09:33:02.740847 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:33:02.740855 | orchestrator | Monday 10 February 2025 09:23:40 +0000 (0:00:00.549) 0:05:12.251 ******* 2025-02-10 09:33:02.740864 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740872 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740881 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740889 | orchestrator | 2025-02-10 09:33:02.740897 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:33:02.740905 | orchestrator | Monday 10 February 2025 09:23:40 +0000 (0:00:00.503) 0:05:12.755 ******* 2025-02-10 09:33:02.740914 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740922 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.740945 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.740954 | orchestrator | 2025-02-10 09:33:02.740962 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:33:02.740971 | orchestrator | Monday 10 February 2025 09:23:41 +0000 (0:00:00.514) 0:05:13.269 ******* 2025-02-10 09:33:02.740984 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.740992 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741000 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741008 | orchestrator | 2025-02-10 09:33:02.741016 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:33:02.741024 | orchestrator | Monday 10 February 2025 09:23:42 +0000 (0:00:00.809) 0:05:14.079 ******* 2025-02-10 09:33:02.741045 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.741054 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.741062 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741070 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.741078 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 
09:33:02.741086 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741094 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.741112 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.741120 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741128 | orchestrator | 2025-02-10 09:33:02.741136 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:33:02.741144 | orchestrator | Monday 10 February 2025 09:23:42 +0000 (0:00:00.574) 0:05:14.654 ******* 2025-02-10 09:33:02.741152 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:33:02.741160 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:33:02.741168 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741176 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 09:33:02.741184 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:33:02.741192 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741200 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:33:02.741208 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:33:02.741216 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741224 | orchestrator | 2025-02-10 09:33:02.741232 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:33:02.741240 | orchestrator | Monday 10 February 2025 09:23:43 +0000 (0:00:00.606) 0:05:15.260 ******* 2025-02-10 09:33:02.741248 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741256 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741264 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741272 | orchestrator | 2025-02-10 09:33:02.741280 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:33:02.741287 | orchestrator | Monday 10 February 2025 09:23:43 +0000 (0:00:00.541) 0:05:15.802 ******* 2025-02-10 09:33:02.741295 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741303 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741311 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741325 | orchestrator | 2025-02-10 09:33:02.741333 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.741342 | orchestrator | Monday 10 February 2025 09:23:44 +0000 (0:00:00.786) 0:05:16.589 ******* 2025-02-10 09:33:02.741403 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741416 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741425 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741433 | orchestrator | 2025-02-10 09:33:02.741441 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.741450 | orchestrator | Monday 10 February 2025 09:23:45 +0000 (0:00:00.538) 0:05:17.127 ******* 2025-02-10 09:33:02.741458 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741466 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741475 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741489 | orchestrator | 2025-02-10 09:33:02.741497 | orchestrator | TASK [ceph-facts : 
set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.741506 | orchestrator | Monday 10 February 2025 09:23:45 +0000 (0:00:00.635) 0:05:17.762 ******* 2025-02-10 09:33:02.741514 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741522 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741531 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741539 | orchestrator | 2025-02-10 09:33:02.741547 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.741556 | orchestrator | Monday 10 February 2025 09:23:46 +0000 (0:00:00.452) 0:05:18.215 ******* 2025-02-10 09:33:02.741564 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741572 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741581 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741589 | orchestrator | 2025-02-10 09:33:02.741597 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.741606 | orchestrator | Monday 10 February 2025 09:23:47 +0000 (0:00:00.750) 0:05:18.965 ******* 2025-02-10 09:33:02.741614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.741622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.741631 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.741639 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741647 | orchestrator | 2025-02-10 09:33:02.741655 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.741664 | orchestrator | Monday 10 February 2025 09:23:47 +0000 (0:00:00.562) 0:05:19.527 ******* 2025-02-10 09:33:02.741672 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.741680 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.741689 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.741697 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741705 | orchestrator | 2025-02-10 09:33:02.741714 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:33:02.741722 | orchestrator | Monday 10 February 2025 09:23:48 +0000 (0:00:00.597) 0:05:20.124 ******* 2025-02-10 09:33:02.741730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.741739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.741747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.741755 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741768 | orchestrator | 2025-02-10 09:33:02.741777 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.741785 | orchestrator | Monday 10 February 2025 09:23:48 +0000 (0:00:00.559) 0:05:20.684 ******* 2025-02-10 09:33:02.741793 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741801 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741810 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741818 | orchestrator | 2025-02-10 09:33:02.741826 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.741835 | orchestrator | Monday 
10 February 2025 09:23:49 +0000 (0:00:00.480) 0:05:21.164 ******* 2025-02-10 09:33:02.741843 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.741851 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741860 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.741868 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741876 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.741885 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741893 | orchestrator | 2025-02-10 09:33:02.741901 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.741910 | orchestrator | Monday 10 February 2025 09:23:50 +0000 (0:00:00.861) 0:05:22.026 ******* 2025-02-10 09:33:02.741923 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.741970 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.741981 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.741989 | orchestrator | 2025-02-10 09:33:02.741997 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.742005 | orchestrator | Monday 10 February 2025 09:23:51 +0000 (0:00:01.024) 0:05:23.051 ******* 2025-02-10 09:33:02.742012 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742042 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.742051 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.742061 | orchestrator | 2025-02-10 09:33:02.742069 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.742078 | orchestrator | Monday 10 February 2025 09:23:51 +0000 (0:00:00.471) 0:05:23.523 ******* 2025-02-10 09:33:02.742088 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.742098 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742107 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.742116 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.742125 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.742134 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.742143 | orchestrator | 2025-02-10 09:33:02.742153 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.742162 | orchestrator | Monday 10 February 2025 09:23:52 +0000 (0:00:00.773) 0:05:24.297 ******* 2025-02-10 09:33:02.742171 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742180 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.742189 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.742198 | orchestrator | 2025-02-10 09:33:02.742234 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.742244 | orchestrator | Monday 10 February 2025 09:23:53 +0000 (0:00:00.935) 0:05:25.233 ******* 2025-02-10 09:33:02.742254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.742262 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.742270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.742278 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:33:02.742286 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  
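The ceph-facts tasks above derive _radosgw_address from either radosgw_address_block, radosgw_address, or radosgw_interface, and then build rgw_instances; on these monitor-only nodes every variant is skipped. As a rough illustration only (not the actual ceph-ansible task), picking an address out of a configured CIDR block can be expressed with the ansible.utils.ipaddr filter; radosgw_address_block is an assumed example variable here:

# Sketch only: select the first host address that falls inside the configured
# radosgw_address_block (assumed, e.g. "192.168.16.0/24"). The real ceph-facts
# role additionally handles IPv6, per-host lookups and the interface variant.
- name: set_fact _radosgw_address to radosgw_address_block ipv4 (sketch)
  ansible.builtin.set_fact:
    _radosgw_address: >-
      {{ ansible_facts['all_ipv4_addresses']
         | ansible.utils.ipaddr(radosgw_address_block)
         | first }}
  when: radosgw_address_block is defined
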
2025-02-10 09:33:02.742294 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:33:02.742302 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742310 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.742317 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:33:02.742325 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:33:02.742333 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:33:02.742341 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.742349 | orchestrator | 2025-02-10 09:33:02.742357 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:33:02.742365 | orchestrator | Monday 10 February 2025 09:23:54 +0000 (0:00:00.830) 0:05:26.063 ******* 2025-02-10 09:33:02.742373 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742381 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.742388 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.742396 | orchestrator | 2025-02-10 09:33:02.742404 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:33:02.742412 | orchestrator | Monday 10 February 2025 09:23:55 +0000 (0:00:00.965) 0:05:27.029 ******* 2025-02-10 09:33:02.742420 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742428 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.742436 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.742444 | orchestrator | 2025-02-10 09:33:02.742452 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:33:02.742465 | orchestrator | Monday 10 February 2025 09:23:56 +0000 (0:00:00.837) 0:05:27.866 ******* 2025-02-10 09:33:02.742473 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742481 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.742489 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.742497 | orchestrator | 2025-02-10 09:33:02.742512 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:33:02.742521 | orchestrator | Monday 10 February 2025 09:23:57 +0000 (0:00:01.047) 0:05:28.914 ******* 2025-02-10 09:33:02.742529 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742537 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.742545 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.742553 | orchestrator | 2025-02-10 09:33:02.742561 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-02-10 09:33:02.742569 | orchestrator | Monday 10 February 2025 09:23:57 +0000 (0:00:00.886) 0:05:29.801 ******* 2025-02-10 09:33:02.742577 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.742585 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.742597 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.742605 | orchestrator | 2025-02-10 09:33:02.742613 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-02-10 09:33:02.742621 | orchestrator | Monday 10 February 2025 09:23:58 +0000 (0:00:00.523) 0:05:30.325 ******* 2025-02-10 09:33:02.742628 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.742637 | 
orchestrator | 2025-02-10 09:33:02.742644 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-02-10 09:33:02.742652 | orchestrator | Monday 10 February 2025 09:23:59 +0000 (0:00:01.111) 0:05:31.437 ******* 2025-02-10 09:33:02.742660 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.742668 | orchestrator | 2025-02-10 09:33:02.742676 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-02-10 09:33:02.742684 | orchestrator | Monday 10 February 2025 09:23:59 +0000 (0:00:00.255) 0:05:31.692 ******* 2025-02-10 09:33:02.742692 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:33:02.742700 | orchestrator | 2025-02-10 09:33:02.742708 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-02-10 09:33:02.742716 | orchestrator | Monday 10 February 2025 09:24:00 +0000 (0:00:00.983) 0:05:32.675 ******* 2025-02-10 09:33:02.742724 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.742732 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.742740 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.742748 | orchestrator | 2025-02-10 09:33:02.742756 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-02-10 09:33:02.742764 | orchestrator | Monday 10 February 2025 09:24:01 +0000 (0:00:00.578) 0:05:33.254 ******* 2025-02-10 09:33:02.742772 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.742780 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.742788 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.742795 | orchestrator | 2025-02-10 09:33:02.742803 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-02-10 09:33:02.742811 | orchestrator | Monday 10 February 2025 09:24:02 +0000 (0:00:01.002) 0:05:34.256 ******* 2025-02-10 09:33:02.742819 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.742827 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.742835 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.742843 | orchestrator | 2025-02-10 09:33:02.742851 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-02-10 09:33:02.742859 | orchestrator | Monday 10 February 2025 09:24:04 +0000 (0:00:01.584) 0:05:35.841 ******* 2025-02-10 09:33:02.742867 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.742875 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.742883 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.742891 | orchestrator | 2025-02-10 09:33:02.742899 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-02-10 09:33:02.742947 | orchestrator | Monday 10 February 2025 09:24:05 +0000 (0:00:01.332) 0:05:37.173 ******* 2025-02-10 09:33:02.742958 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.742966 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.742974 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.742982 | orchestrator | 2025-02-10 09:33:02.742990 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-02-10 09:33:02.742998 | orchestrator | Monday 10 February 2025 09:24:06 +0000 (0:00:01.057) 0:05:38.231 ******* 2025-02-10 09:33:02.743005 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.743013 | orchestrator | ok: 
[testbed-node-1] 2025-02-10 09:33:02.743021 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.743029 | orchestrator | 2025-02-10 09:33:02.743037 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-02-10 09:33:02.743045 | orchestrator | Monday 10 February 2025 09:24:07 +0000 (0:00:01.478) 0:05:39.709 ******* 2025-02-10 09:33:02.743053 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.743061 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.743069 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.743077 | orchestrator | 2025-02-10 09:33:02.743085 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-02-10 09:33:02.743093 | orchestrator | Monday 10 February 2025 09:24:08 +0000 (0:00:00.505) 0:05:40.215 ******* 2025-02-10 09:33:02.743100 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.743108 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.743116 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.743124 | orchestrator | 2025-02-10 09:33:02.743132 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-02-10 09:33:02.743140 | orchestrator | Monday 10 February 2025 09:24:08 +0000 (0:00:00.516) 0:05:40.731 ******* 2025-02-10 09:33:02.743148 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.743155 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.743163 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.743171 | orchestrator | 2025-02-10 09:33:02.743179 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-02-10 09:33:02.743187 | orchestrator | Monday 10 February 2025 09:24:09 +0000 (0:00:00.600) 0:05:41.331 ******* 2025-02-10 09:33:02.743195 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.743203 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.743211 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.743218 | orchestrator | 2025-02-10 09:33:02.743226 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-02-10 09:33:02.743238 | orchestrator | Monday 10 February 2025 09:24:10 +0000 (0:00:01.127) 0:05:42.459 ******* 2025-02-10 09:33:02.743246 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.743254 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.743261 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.743269 | orchestrator | 2025-02-10 09:33:02.743277 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-02-10 09:33:02.743285 | orchestrator | Monday 10 February 2025 09:24:12 +0000 (0:00:01.476) 0:05:43.935 ******* 2025-02-10 09:33:02.743293 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.743301 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.743309 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.743316 | orchestrator | 2025-02-10 09:33:02.743324 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-02-10 09:33:02.743332 | orchestrator | Monday 10 February 2025 09:24:12 +0000 (0:00:00.453) 0:05:44.389 ******* 2025-02-10 09:33:02.743341 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.743348 | orchestrator | 2025-02-10 09:33:02.743356 | 
orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-02-10 09:33:02.743364 | orchestrator | Monday 10 February 2025 09:24:13 +0000 (0:00:00.978) 0:05:45.367 ******* 2025-02-10 09:33:02.743377 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.743385 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.743392 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.743400 | orchestrator | 2025-02-10 09:33:02.743408 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-02-10 09:33:02.743416 | orchestrator | Monday 10 February 2025 09:24:13 +0000 (0:00:00.469) 0:05:45.837 ******* 2025-02-10 09:33:02.743424 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.743432 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.743439 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.743447 | orchestrator | 2025-02-10 09:33:02.743455 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-02-10 09:33:02.743463 | orchestrator | Monday 10 February 2025 09:24:14 +0000 (0:00:00.422) 0:05:46.260 ******* 2025-02-10 09:33:02.743471 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.743479 | orchestrator | 2025-02-10 09:33:02.743487 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-02-10 09:33:02.743495 | orchestrator | Monday 10 February 2025 09:24:15 +0000 (0:00:01.015) 0:05:47.275 ******* 2025-02-10 09:33:02.743503 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.743511 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.743519 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.743527 | orchestrator | 2025-02-10 09:33:02.743545 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-02-10 09:33:02.743553 | orchestrator | Monday 10 February 2025 09:24:17 +0000 (0:00:01.692) 0:05:48.968 ******* 2025-02-10 09:33:02.743561 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.743569 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.743577 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.743585 | orchestrator | 2025-02-10 09:33:02.743593 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-02-10 09:33:02.743601 | orchestrator | Monday 10 February 2025 09:24:18 +0000 (0:00:01.414) 0:05:50.383 ******* 2025-02-10 09:33:02.743609 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.743628 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.743644 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.743652 | orchestrator | 2025-02-10 09:33:02.743660 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-02-10 09:33:02.743692 | orchestrator | Monday 10 February 2025 09:24:20 +0000 (0:00:02.250) 0:05:52.634 ******* 2025-02-10 09:33:02.743701 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.743709 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.743717 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.743725 | orchestrator | 2025-02-10 09:33:02.743733 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-02-10 09:33:02.743741 | orchestrator | 
Monday 10 February 2025 09:24:23 +0000 (0:00:02.436) 0:05:55.071 ******* 2025-02-10 09:33:02.743749 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.743757 | orchestrator | 2025-02-10 09:33:02.743765 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-02-10 09:33:02.743774 | orchestrator | Monday 10 February 2025 09:24:24 +0000 (0:00:01.178) 0:05:56.249 ******* 2025-02-10 09:33:02.743782 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-02-10 09:33:02.743790 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.743797 | orchestrator | 2025-02-10 09:33:02.743805 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-02-10 09:33:02.743813 | orchestrator | Monday 10 February 2025 09:24:45 +0000 (0:00:21.482) 0:06:17.731 ******* 2025-02-10 09:33:02.743821 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.743829 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.743842 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.743851 | orchestrator | 2025-02-10 09:33:02.743859 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-02-10 09:33:02.743867 | orchestrator | Monday 10 February 2025 09:24:53 +0000 (0:00:07.665) 0:06:25.396 ******* 2025-02-10 09:33:02.743875 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.743883 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.743891 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.743899 | orchestrator | 2025-02-10 09:33:02.743907 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:33:02.743915 | orchestrator | Monday 10 February 2025 09:24:54 +0000 (0:00:01.345) 0:06:26.742 ******* 2025-02-10 09:33:02.743922 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.743946 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.743960 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.743975 | orchestrator | 2025-02-10 09:33:02.743988 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-02-10 09:33:02.744000 | orchestrator | Monday 10 February 2025 09:24:55 +0000 (0:00:00.847) 0:06:27.589 ******* 2025-02-10 09:33:02.744007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.744015 | orchestrator | 2025-02-10 09:33:02.744023 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-02-10 09:33:02.744035 | orchestrator | Monday 10 February 2025 09:24:56 +0000 (0:00:01.073) 0:06:28.663 ******* 2025-02-10 09:33:02.744043 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.744051 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.744059 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.744067 | orchestrator | 2025-02-10 09:33:02.744075 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-02-10 09:33:02.744083 | orchestrator | Monday 10 February 2025 09:24:57 +0000 (0:00:00.448) 0:06:29.112 ******* 2025-02-10 09:33:02.744091 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.744098 | orchestrator | changed: [testbed-node-1] 
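The quorum wait above retried once before all three monitors reported in. A minimal sketch of how such a wait can be written, reusing the container_exec_cmd fact and mon group naming visible in this log (the task body itself is an assumption, not the literal ceph-ansible task):

# Sketch only: poll "ceph quorum_status" until every monitor host appears in
# quorum_names. container_exec_cmd and mon_group_name mirror names seen in the
# log above; retries/delay values are illustrative.
- name: waiting for the monitor(s) to form the quorum (sketch)
  ansible.builtin.command: "{{ container_exec_cmd }} ceph quorum_status --format json"
  register: quorum_status
  changed_when: false
  run_once: true
  retries: 10
  delay: 20
  until: >-
    (quorum_status.stdout | from_json).quorum_names | length
    == groups[mon_group_name] | length
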
2025-02-10 09:33:02.744106 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.744114 | orchestrator | 2025-02-10 09:33:02.744122 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-02-10 09:33:02.744130 | orchestrator | Monday 10 February 2025 09:24:59 +0000 (0:00:01.756) 0:06:30.868 ******* 2025-02-10 09:33:02.744138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.744146 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.744154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:33:02.744161 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.744169 | orchestrator | 2025-02-10 09:33:02.744177 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-02-10 09:33:02.744185 | orchestrator | Monday 10 February 2025 09:24:59 +0000 (0:00:00.820) 0:06:31.689 ******* 2025-02-10 09:33:02.744193 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.744200 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.744208 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.744216 | orchestrator | 2025-02-10 09:33:02.744224 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:33:02.744232 | orchestrator | Monday 10 February 2025 09:25:00 +0000 (0:00:00.497) 0:06:32.186 ******* 2025-02-10 09:33:02.744240 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.744247 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.744255 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.744263 | orchestrator | 2025-02-10 09:33:02.744271 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-02-10 09:33:02.744279 | orchestrator | 2025-02-10 09:33:02.744286 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:33:02.744303 | orchestrator | Monday 10 February 2025 09:25:02 +0000 (0:00:02.644) 0:06:34.830 ******* 2025-02-10 09:33:02.744317 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.744326 | orchestrator | 2025-02-10 09:33:02.744334 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:33:02.744342 | orchestrator | Monday 10 February 2025 09:25:03 +0000 (0:00:00.980) 0:06:35.810 ******* 2025-02-10 09:33:02.744350 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.744365 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.744381 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.744389 | orchestrator | 2025-02-10 09:33:02.744397 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:33:02.744405 | orchestrator | Monday 10 February 2025 09:25:04 +0000 (0:00:00.805) 0:06:36.616 ******* 2025-02-10 09:33:02.744438 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.744449 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.744458 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.744466 | orchestrator | 2025-02-10 09:33:02.744475 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:33:02.744484 | orchestrator | Monday 10 February 2025 09:25:05 +0000 
(0:00:00.630) 0:06:37.247 ******* 2025-02-10 09:33:02.744492 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.744501 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.744509 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.744518 | orchestrator | 2025-02-10 09:33:02.744526 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:33:02.744535 | orchestrator | Monday 10 February 2025 09:25:05 +0000 (0:00:00.354) 0:06:37.601 ******* 2025-02-10 09:33:02.744543 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.744552 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.744560 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.744569 | orchestrator | 2025-02-10 09:33:02.744577 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:33:02.744586 | orchestrator | Monday 10 February 2025 09:25:06 +0000 (0:00:00.376) 0:06:37.978 ******* 2025-02-10 09:33:02.744595 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.744603 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.744612 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.744620 | orchestrator | 2025-02-10 09:33:02.744629 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:33:02.744637 | orchestrator | Monday 10 February 2025 09:25:06 +0000 (0:00:00.759) 0:06:38.737 ******* 2025-02-10 09:33:02.744646 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.744654 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.744663 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.744672 | orchestrator | 2025-02-10 09:33:02.744680 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:33:02.744689 | orchestrator | Monday 10 February 2025 09:25:07 +0000 (0:00:00.655) 0:06:39.393 ******* 2025-02-10 09:33:02.744697 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.744706 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.744715 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.744723 | orchestrator | 2025-02-10 09:33:02.744732 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:33:02.744740 | orchestrator | Monday 10 February 2025 09:25:07 +0000 (0:00:00.375) 0:06:39.768 ******* 2025-02-10 09:33:02.744749 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.744758 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.744770 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.744779 | orchestrator | 2025-02-10 09:33:02.744788 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:33:02.744797 | orchestrator | Monday 10 February 2025 09:25:08 +0000 (0:00:00.407) 0:06:40.175 ******* 2025-02-10 09:33:02.744805 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.744814 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.744828 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.744837 | orchestrator | 2025-02-10 09:33:02.744845 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:33:02.744854 | orchestrator | Monday 10 February 2025 09:25:08 +0000 (0:00:00.354) 0:06:40.530 ******* 2025-02-10 09:33:02.744863 | orchestrator | 
skipping: [testbed-node-0] 2025-02-10 09:33:02.744871 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.744880 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.744888 | orchestrator | 2025-02-10 09:33:02.744900 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:33:02.744909 | orchestrator | Monday 10 February 2025 09:25:09 +0000 (0:00:00.625) 0:06:41.155 ******* 2025-02-10 09:33:02.744917 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.744926 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.744966 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.744975 | orchestrator | 2025-02-10 09:33:02.744984 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:33:02.744993 | orchestrator | Monday 10 February 2025 09:25:10 +0000 (0:00:00.883) 0:06:42.038 ******* 2025-02-10 09:33:02.745001 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745010 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745018 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745026 | orchestrator | 2025-02-10 09:33:02.745035 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:33:02.745043 | orchestrator | Monday 10 February 2025 09:25:10 +0000 (0:00:00.404) 0:06:42.443 ******* 2025-02-10 09:33:02.745052 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.745060 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.745069 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.745077 | orchestrator | 2025-02-10 09:33:02.745086 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:33:02.745094 | orchestrator | Monday 10 February 2025 09:25:10 +0000 (0:00:00.382) 0:06:42.826 ******* 2025-02-10 09:33:02.745102 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745111 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745120 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745128 | orchestrator | 2025-02-10 09:33:02.745137 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:33:02.745145 | orchestrator | Monday 10 February 2025 09:25:11 +0000 (0:00:00.629) 0:06:43.455 ******* 2025-02-10 09:33:02.745163 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745172 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745180 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745189 | orchestrator | 2025-02-10 09:33:02.745197 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:33:02.745206 | orchestrator | Monday 10 February 2025 09:25:12 +0000 (0:00:00.389) 0:06:43.845 ******* 2025-02-10 09:33:02.745215 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745223 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745232 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745240 | orchestrator | 2025-02-10 09:33:02.745249 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:33:02.745258 | orchestrator | Monday 10 February 2025 09:25:12 +0000 (0:00:00.338) 0:06:44.184 ******* 2025-02-10 09:33:02.745289 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745299 | orchestrator | skipping: [testbed-node-1] 
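The check_running_containers tasks above only probe for existing daemon containers so later handlers know which services may need a restart; on these nodes only the mon, mgr and ceph-crash probes return ok. A hedged sketch of one such probe, assuming container_binary resolves to docker or podman:

# Sketch only: an empty stdout means no ceph-mon container is running on the
# host; failed_when is disabled so absence of the container is not an error.
- name: check for a mon container (sketch)
  ansible.builtin.command: >-
    {{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
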
2025-02-10 09:33:02.745308 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745316 | orchestrator | 2025-02-10 09:33:02.745325 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:33:02.745333 | orchestrator | Monday 10 February 2025 09:25:12 +0000 (0:00:00.346) 0:06:44.531 ******* 2025-02-10 09:33:02.745342 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745350 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745359 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745375 | orchestrator | 2025-02-10 09:33:02.745384 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:33:02.745393 | orchestrator | Monday 10 February 2025 09:25:13 +0000 (0:00:00.693) 0:06:45.224 ******* 2025-02-10 09:33:02.745401 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.745410 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.745419 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.745427 | orchestrator | 2025-02-10 09:33:02.745436 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:33:02.745445 | orchestrator | Monday 10 February 2025 09:25:13 +0000 (0:00:00.419) 0:06:45.644 ******* 2025-02-10 09:33:02.745453 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.745462 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.745471 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.745479 | orchestrator | 2025-02-10 09:33:02.745488 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:33:02.745497 | orchestrator | Monday 10 February 2025 09:25:14 +0000 (0:00:00.357) 0:06:46.001 ******* 2025-02-10 09:33:02.745511 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745520 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745529 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745537 | orchestrator | 2025-02-10 09:33:02.745546 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:33:02.745555 | orchestrator | Monday 10 February 2025 09:25:14 +0000 (0:00:00.380) 0:06:46.382 ******* 2025-02-10 09:33:02.745563 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745572 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745580 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745589 | orchestrator | 2025-02-10 09:33:02.745598 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:33:02.745606 | orchestrator | Monday 10 February 2025 09:25:15 +0000 (0:00:00.730) 0:06:47.113 ******* 2025-02-10 09:33:02.745615 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745624 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745632 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745640 | orchestrator | 2025-02-10 09:33:02.745649 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:33:02.745658 | orchestrator | Monday 10 February 2025 09:25:15 +0000 (0:00:00.401) 0:06:47.514 ******* 2025-02-10 09:33:02.745666 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745675 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745683 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745692 
| orchestrator | 2025-02-10 09:33:02.745700 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:33:02.745709 | orchestrator | Monday 10 February 2025 09:25:16 +0000 (0:00:00.372) 0:06:47.887 ******* 2025-02-10 09:33:02.745717 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745726 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745734 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745748 | orchestrator | 2025-02-10 09:33:02.745757 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:33:02.745766 | orchestrator | Monday 10 February 2025 09:25:16 +0000 (0:00:00.361) 0:06:48.248 ******* 2025-02-10 09:33:02.745775 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745784 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745792 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745801 | orchestrator | 2025-02-10 09:33:02.745815 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:33:02.745825 | orchestrator | Monday 10 February 2025 09:25:17 +0000 (0:00:00.833) 0:06:49.082 ******* 2025-02-10 09:33:02.745833 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745842 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745850 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745859 | orchestrator | 2025-02-10 09:33:02.745875 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:33:02.745884 | orchestrator | Monday 10 February 2025 09:25:17 +0000 (0:00:00.377) 0:06:49.459 ******* 2025-02-10 09:33:02.745893 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745901 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745910 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745918 | orchestrator | 2025-02-10 09:33:02.745927 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:33:02.745956 | orchestrator | Monday 10 February 2025 09:25:18 +0000 (0:00:00.381) 0:06:49.841 ******* 2025-02-10 09:33:02.745965 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.745974 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.745982 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.745991 | orchestrator | 2025-02-10 09:33:02.745999 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:33:02.746008 | orchestrator | Monday 10 February 2025 09:25:18 +0000 (0:00:00.397) 0:06:50.238 ******* 2025-02-10 09:33:02.746038 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746048 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746057 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746066 | orchestrator | 2025-02-10 09:33:02.746074 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:33:02.746083 | orchestrator | Monday 10 February 2025 09:25:19 +0000 (0:00:00.672) 0:06:50.911 ******* 2025-02-10 09:33:02.746092 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746100 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746109 | orchestrator | skipping: 
[testbed-node-2] 2025-02-10 09:33:02.746117 | orchestrator | 2025-02-10 09:33:02.746149 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:33:02.746159 | orchestrator | Monday 10 February 2025 09:25:19 +0000 (0:00:00.365) 0:06:51.277 ******* 2025-02-10 09:33:02.746168 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746176 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746185 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746193 | orchestrator | 2025-02-10 09:33:02.746202 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:33:02.746215 | orchestrator | Monday 10 February 2025 09:25:19 +0000 (0:00:00.348) 0:06:51.625 ******* 2025-02-10 09:33:02.746224 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.746232 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.746241 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.746250 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.746258 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746267 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746275 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.746284 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.746292 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746301 | orchestrator | 2025-02-10 09:33:02.746310 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:33:02.746318 | orchestrator | Monday 10 February 2025 09:25:20 +0000 (0:00:00.419) 0:06:52.045 ******* 2025-02-10 09:33:02.746327 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:33:02.746346 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:33:02.746354 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746363 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 09:33:02.746372 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:33:02.746380 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746389 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:33:02.746404 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:33:02.746413 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746422 | orchestrator | 2025-02-10 09:33:02.746430 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:33:02.746439 | orchestrator | Monday 10 February 2025 09:25:20 +0000 (0:00:00.724) 0:06:52.770 ******* 2025-02-10 09:33:02.746447 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746456 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746465 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746473 | orchestrator | 2025-02-10 09:33:02.746482 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:33:02.746490 | orchestrator | Monday 10 February 2025 09:25:21 +0000 (0:00:00.411) 0:06:53.182 ******* 2025-02-10 09:33:02.746499 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746508 | orchestrator | skipping: [testbed-node-1] 
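The skipped ceph-config tasks above would normally count the OSDs a ceph-volume batch run is going to create and merge in OSDs that already exist, so that osd_memory_target can be sized. A rough sketch under the assumption of a flat devices list (the report format differs between ceph releases, hence the separate legacy/new variants in the log):

# Sketch only: ask ceph-volume how many OSDs a batch run would create for the
# assumed "devices" list, then record the count. Real handling distinguishes
# the legacy report (dict with an "osds" key) from the newer flat-list report
# and also adds OSDs already present per "ceph-volume lvm list".
- name: run 'ceph-volume lvm batch --report' (sketch)
  ansible.builtin.command: >-
    ceph-volume lvm batch --report --format json {{ devices | join(' ') }}
  register: lvm_batch_report
  changed_when: false

- name: set_fact num_osds from the report (sketch)
  ansible.builtin.set_fact:
    num_osds: "{{ (lvm_batch_report.stdout | from_json) | length }}"
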
2025-02-10 09:33:02.746516 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746525 | orchestrator | 2025-02-10 09:33:02.746533 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.746542 | orchestrator | Monday 10 February 2025 09:25:21 +0000 (0:00:00.414) 0:06:53.596 ******* 2025-02-10 09:33:02.746550 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746559 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746567 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746576 | orchestrator | 2025-02-10 09:33:02.746585 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.746593 | orchestrator | Monday 10 February 2025 09:25:22 +0000 (0:00:00.385) 0:06:53.982 ******* 2025-02-10 09:33:02.746601 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746610 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746618 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746627 | orchestrator | 2025-02-10 09:33:02.746635 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.746644 | orchestrator | Monday 10 February 2025 09:25:22 +0000 (0:00:00.687) 0:06:54.669 ******* 2025-02-10 09:33:02.746653 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746661 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746670 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746678 | orchestrator | 2025-02-10 09:33:02.746687 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.746696 | orchestrator | Monday 10 February 2025 09:25:23 +0000 (0:00:00.446) 0:06:55.116 ******* 2025-02-10 09:33:02.746704 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746713 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.746722 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.746730 | orchestrator | 2025-02-10 09:33:02.746739 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.746747 | orchestrator | Monday 10 February 2025 09:25:23 +0000 (0:00:00.475) 0:06:55.591 ******* 2025-02-10 09:33:02.746756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.746765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.746773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.746782 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746790 | orchestrator | 2025-02-10 09:33:02.746799 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.746807 | orchestrator | Monday 10 February 2025 09:25:24 +0000 (0:00:00.781) 0:06:56.373 ******* 2025-02-10 09:33:02.746816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.746824 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.746833 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.746841 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746855 | orchestrator | 2025-02-10 09:33:02.746864 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to 
radosgw_interface - ipv6] ****** 2025-02-10 09:33:02.746904 | orchestrator | Monday 10 February 2025 09:25:25 +0000 (0:00:01.305) 0:06:57.678 ******* 2025-02-10 09:33:02.746915 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.746924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.746975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.746986 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.746995 | orchestrator | 2025-02-10 09:33:02.747004 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.747012 | orchestrator | Monday 10 February 2025 09:25:26 +0000 (0:00:00.472) 0:06:58.151 ******* 2025-02-10 09:33:02.747021 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747029 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747043 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747051 | orchestrator | 2025-02-10 09:33:02.747063 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.747072 | orchestrator | Monday 10 February 2025 09:25:26 +0000 (0:00:00.373) 0:06:58.525 ******* 2025-02-10 09:33:02.747080 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.747087 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747095 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.747103 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747111 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.747119 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747127 | orchestrator | 2025-02-10 09:33:02.747134 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.747142 | orchestrator | Monday 10 February 2025 09:25:27 +0000 (0:00:00.641) 0:06:59.166 ******* 2025-02-10 09:33:02.747150 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747158 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747166 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747174 | orchestrator | 2025-02-10 09:33:02.747181 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.747189 | orchestrator | Monday 10 February 2025 09:25:27 +0000 (0:00:00.400) 0:06:59.566 ******* 2025-02-10 09:33:02.747197 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747205 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747213 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747220 | orchestrator | 2025-02-10 09:33:02.747228 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.747236 | orchestrator | Monday 10 February 2025 09:25:28 +0000 (0:00:00.730) 0:07:00.297 ******* 2025-02-10 09:33:02.747245 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.747253 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747260 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.747268 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747276 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.747284 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747292 | orchestrator | 
2025-02-10 09:33:02.747300 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.747307 | orchestrator | Monday 10 February 2025 09:25:29 +0000 (0:00:00.573) 0:07:00.870 ******* 2025-02-10 09:33:02.747315 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747323 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747331 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747339 | orchestrator | 2025-02-10 09:33:02.747347 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.747354 | orchestrator | Monday 10 February 2025 09:25:29 +0000 (0:00:00.380) 0:07:01.250 ******* 2025-02-10 09:33:02.747362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.747375 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.747383 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.747391 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747399 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:33:02.747407 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:33:02.747414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:33:02.747422 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747430 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:33:02.747438 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:33:02.747446 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:33:02.747453 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747465 | orchestrator | 2025-02-10 09:33:02.747473 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:33:02.747481 | orchestrator | Monday 10 February 2025 09:25:30 +0000 (0:00:00.995) 0:07:02.246 ******* 2025-02-10 09:33:02.747489 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747497 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747515 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747523 | orchestrator | 2025-02-10 09:33:02.747531 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:33:02.747539 | orchestrator | Monday 10 February 2025 09:25:31 +0000 (0:00:00.601) 0:07:02.847 ******* 2025-02-10 09:33:02.747546 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747554 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747570 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747578 | orchestrator | 2025-02-10 09:33:02.747586 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:33:02.747594 | orchestrator | Monday 10 February 2025 09:25:31 +0000 (0:00:00.858) 0:07:03.705 ******* 2025-02-10 09:33:02.747602 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747610 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747618 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747626 | orchestrator | 2025-02-10 09:33:02.747634 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:33:02.747665 | orchestrator | Monday 10 
February 2025 09:25:32 +0000 (0:00:00.602) 0:07:04.308 ******* 2025-02-10 09:33:02.747675 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747683 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747691 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747699 | orchestrator | 2025-02-10 09:33:02.747707 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-02-10 09:33:02.747715 | orchestrator | Monday 10 February 2025 09:25:33 +0000 (0:00:00.991) 0:07:05.300 ******* 2025-02-10 09:33:02.747723 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:33:02.747731 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:02.747739 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:02.747747 | orchestrator | 2025-02-10 09:33:02.747755 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-02-10 09:33:02.747763 | orchestrator | Monday 10 February 2025 09:25:34 +0000 (0:00:00.711) 0:07:06.011 ******* 2025-02-10 09:33:02.747771 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.747779 | orchestrator | 2025-02-10 09:33:02.747787 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-02-10 09:33:02.747795 | orchestrator | Monday 10 February 2025 09:25:34 +0000 (0:00:00.626) 0:07:06.638 ******* 2025-02-10 09:33:02.747803 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.747816 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.747824 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.747833 | orchestrator | 2025-02-10 09:33:02.747841 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-02-10 09:33:02.747848 | orchestrator | Monday 10 February 2025 09:25:35 +0000 (0:00:01.081) 0:07:07.719 ******* 2025-02-10 09:33:02.747856 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.747864 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.747872 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.747880 | orchestrator | 2025-02-10 09:33:02.747888 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-02-10 09:33:02.747896 | orchestrator | Monday 10 February 2025 09:25:36 +0000 (0:00:00.376) 0:07:08.096 ******* 2025-02-10 09:33:02.747904 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:33:02.747918 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:33:02.747926 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:33:02.747950 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-02-10 09:33:02.747965 | orchestrator | 2025-02-10 09:33:02.747978 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-02-10 09:33:02.747991 | orchestrator | Monday 10 February 2025 09:25:44 +0000 (0:00:08.513) 0:07:16.609 ******* 2025-02-10 09:33:02.747999 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.748007 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.748015 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.748023 | orchestrator | 2025-02-10 09:33:02.748031 | orchestrator | TASK [ceph-mgr 
: get keys from monitors] *************************************** 2025-02-10 09:33:02.748039 | orchestrator | Monday 10 February 2025 09:25:45 +0000 (0:00:00.508) 0:07:17.118 ******* 2025-02-10 09:33:02.748046 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-10 09:33:02.748054 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-10 09:33:02.748062 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-10 09:33:02.748070 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-02-10 09:33:02.748078 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:02.748086 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:02.748093 | orchestrator | 2025-02-10 09:33:02.748101 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-02-10 09:33:02.748109 | orchestrator | Monday 10 February 2025 09:25:47 +0000 (0:00:02.033) 0:07:19.152 ******* 2025-02-10 09:33:02.748117 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-10 09:33:02.748125 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-10 09:33:02.748133 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-10 09:33:02.748141 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:33:02.748148 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-02-10 09:33:02.748156 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-02-10 09:33:02.748164 | orchestrator | 2025-02-10 09:33:02.748172 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-02-10 09:33:02.748180 | orchestrator | Monday 10 February 2025 09:25:48 +0000 (0:00:01.387) 0:07:20.539 ******* 2025-02-10 09:33:02.748187 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.748195 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.748203 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.748211 | orchestrator | 2025-02-10 09:33:02.748218 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-02-10 09:33:02.748226 | orchestrator | Monday 10 February 2025 09:25:49 +0000 (0:00:01.055) 0:07:21.595 ******* 2025-02-10 09:33:02.748234 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.748242 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.748250 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.748258 | orchestrator | 2025-02-10 09:33:02.748266 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-02-10 09:33:02.748279 | orchestrator | Monday 10 February 2025 09:25:50 +0000 (0:00:00.344) 0:07:21.940 ******* 2025-02-10 09:33:02.748287 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.748295 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.748307 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.748315 | orchestrator | 2025-02-10 09:33:02.748323 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-02-10 09:33:02.748331 | orchestrator | Monday 10 February 2025 09:25:50 +0000 (0:00:00.404) 0:07:22.344 ******* 2025-02-10 09:33:02.748362 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.748371 | orchestrator | 2025-02-10 09:33:02.748379 | orchestrator | 
TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-02-10 09:33:02.748387 | orchestrator | Monday 10 February 2025 09:25:51 +0000 (0:00:00.934) 0:07:23.279 ******* 2025-02-10 09:33:02.748395 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.748407 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.748415 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.748423 | orchestrator | 2025-02-10 09:33:02.748431 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-02-10 09:33:02.748438 | orchestrator | Monday 10 February 2025 09:25:51 +0000 (0:00:00.422) 0:07:23.701 ******* 2025-02-10 09:33:02.748456 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.748464 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.748472 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.748480 | orchestrator | 2025-02-10 09:33:02.748488 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-02-10 09:33:02.748496 | orchestrator | Monday 10 February 2025 09:25:52 +0000 (0:00:00.381) 0:07:24.082 ******* 2025-02-10 09:33:02.748504 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.748512 | orchestrator | 2025-02-10 09:33:02.748520 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-02-10 09:33:02.748527 | orchestrator | Monday 10 February 2025 09:25:53 +0000 (0:00:00.954) 0:07:25.037 ******* 2025-02-10 09:33:02.748535 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.748543 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.748551 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.748559 | orchestrator | 2025-02-10 09:33:02.748567 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-02-10 09:33:02.748575 | orchestrator | Monday 10 February 2025 09:25:54 +0000 (0:00:01.367) 0:07:26.405 ******* 2025-02-10 09:33:02.748583 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.748590 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.748598 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.748606 | orchestrator | 2025-02-10 09:33:02.748614 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-02-10 09:33:02.748622 | orchestrator | Monday 10 February 2025 09:25:55 +0000 (0:00:01.286) 0:07:27.691 ******* 2025-02-10 09:33:02.748630 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.748637 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.748645 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.748653 | orchestrator | 2025-02-10 09:33:02.748661 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-02-10 09:33:02.748669 | orchestrator | Monday 10 February 2025 09:25:58 +0000 (0:00:02.338) 0:07:30.030 ******* 2025-02-10 09:33:02.748677 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.748685 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.748692 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.748700 | orchestrator | 2025-02-10 09:33:02.748708 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-02-10 09:33:02.748716 | orchestrator | Monday 10 
February 2025 09:26:00 +0000 (0:00:02.267) 0:07:32.297 ******* 2025-02-10 09:33:02.748729 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.748737 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.748749 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-02-10 09:33:02.748757 | orchestrator | 2025-02-10 09:33:02.748765 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-02-10 09:33:02.748773 | orchestrator | Monday 10 February 2025 09:26:01 +0000 (0:00:01.044) 0:07:33.342 ******* 2025-02-10 09:33:02.748781 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-02-10 09:33:02.748789 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-02-10 09:33:02.748796 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:33:02.748804 | orchestrator | 2025-02-10 09:33:02.748812 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-02-10 09:33:02.748820 | orchestrator | Monday 10 February 2025 09:26:15 +0000 (0:00:13.669) 0:07:47.011 ******* 2025-02-10 09:33:02.748828 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:33:02.748836 | orchestrator | 2025-02-10 09:33:02.748844 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-02-10 09:33:02.748852 | orchestrator | Monday 10 February 2025 09:26:16 +0000 (0:00:01.693) 0:07:48.705 ******* 2025-02-10 09:33:02.748860 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.748868 | orchestrator | 2025-02-10 09:33:02.748876 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-02-10 09:33:02.748883 | orchestrator | Monday 10 February 2025 09:26:17 +0000 (0:00:00.467) 0:07:49.172 ******* 2025-02-10 09:33:02.748891 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.748899 | orchestrator | 2025-02-10 09:33:02.748907 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-02-10 09:33:02.748914 | orchestrator | Monday 10 February 2025 09:26:18 +0000 (0:00:00.668) 0:07:49.841 ******* 2025-02-10 09:33:02.748922 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-02-10 09:33:02.748946 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-02-10 09:33:02.748956 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-02-10 09:33:02.748963 | orchestrator | 2025-02-10 09:33:02.748971 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-02-10 09:33:02.748979 | orchestrator | Monday 10 February 2025 09:26:24 +0000 (0:00:06.332) 0:07:56.173 ******* 2025-02-10 09:33:02.748987 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-02-10 09:33:02.748995 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-02-10 09:33:02.749024 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-02-10 09:33:02.749033 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-02-10 09:33:02.749042 | orchestrator | 2025-02-10 09:33:02.749050 | orchestrator | RUNNING 
HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:33:02.749057 | orchestrator | Monday 10 February 2025 09:26:29 +0000 (0:00:05.530) 0:08:01.704 ******* 2025-02-10 09:33:02.749065 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.749073 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.749081 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.749089 | orchestrator | 2025-02-10 09:33:02.749097 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-10 09:33:02.749105 | orchestrator | Monday 10 February 2025 09:26:30 +0000 (0:00:01.119) 0:08:02.824 ******* 2025-02-10 09:33:02.749113 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:02.749121 | orchestrator | 2025-02-10 09:33:02.749129 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-02-10 09:33:02.749145 | orchestrator | Monday 10 February 2025 09:26:31 +0000 (0:00:00.670) 0:08:03.494 ******* 2025-02-10 09:33:02.749153 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.749165 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.749173 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.749181 | orchestrator | 2025-02-10 09:33:02.749189 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-10 09:33:02.749197 | orchestrator | Monday 10 February 2025 09:26:32 +0000 (0:00:00.392) 0:08:03.887 ******* 2025-02-10 09:33:02.749205 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.749213 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.749220 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.749228 | orchestrator | 2025-02-10 09:33:02.749236 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-02-10 09:33:02.749244 | orchestrator | Monday 10 February 2025 09:26:33 +0000 (0:00:01.846) 0:08:05.734 ******* 2025-02-10 09:33:02.749269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:33:02.749278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:33:02.749286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:33:02.749293 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.749301 | orchestrator | 2025-02-10 09:33:02.749309 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-02-10 09:33:02.749317 | orchestrator | Monday 10 February 2025 09:26:34 +0000 (0:00:00.759) 0:08:06.494 ******* 2025-02-10 09:33:02.749325 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.749333 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.749341 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.749349 | orchestrator | 2025-02-10 09:33:02.749357 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:33:02.749365 | orchestrator | Monday 10 February 2025 09:26:35 +0000 (0:00:00.387) 0:08:06.881 ******* 2025-02-10 09:33:02.749373 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.749381 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.749389 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.749397 | orchestrator | 2025-02-10 09:33:02.749408 | orchestrator | PLAY [Apply role 
ceph-osd] ***************************************************** 2025-02-10 09:33:02.749416 | orchestrator | 2025-02-10 09:33:02.749424 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:33:02.749432 | orchestrator | Monday 10 February 2025 09:26:37 +0000 (0:00:02.531) 0:08:09.413 ******* 2025-02-10 09:33:02.749440 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.749448 | orchestrator | 2025-02-10 09:33:02.749456 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:33:02.749464 | orchestrator | Monday 10 February 2025 09:26:38 +0000 (0:00:00.639) 0:08:10.053 ******* 2025-02-10 09:33:02.749472 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.749480 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.749488 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.749496 | orchestrator | 2025-02-10 09:33:02.749504 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:33:02.749512 | orchestrator | Monday 10 February 2025 09:26:38 +0000 (0:00:00.311) 0:08:10.364 ******* 2025-02-10 09:33:02.749519 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.749527 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.749535 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.749543 | orchestrator | 2025-02-10 09:33:02.749551 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:33:02.749559 | orchestrator | Monday 10 February 2025 09:26:39 +0000 (0:00:01.100) 0:08:11.465 ******* 2025-02-10 09:33:02.749567 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.749578 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.749586 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.749599 | orchestrator | 2025-02-10 09:33:02.749607 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:33:02.749615 | orchestrator | Monday 10 February 2025 09:26:40 +0000 (0:00:00.855) 0:08:12.320 ******* 2025-02-10 09:33:02.749623 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.749631 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.749639 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.749647 | orchestrator | 2025-02-10 09:33:02.749662 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:33:02.749670 | orchestrator | Monday 10 February 2025 09:26:41 +0000 (0:00:00.811) 0:08:13.132 ******* 2025-02-10 09:33:02.749678 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.749686 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.749698 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.749706 | orchestrator | 2025-02-10 09:33:02.749714 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:33:02.749722 | orchestrator | Monday 10 February 2025 09:26:41 +0000 (0:00:00.367) 0:08:13.499 ******* 2025-02-10 09:33:02.749730 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.749738 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.749767 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.749776 | orchestrator | 2025-02-10 09:33:02.749793 | orchestrator | TASK 
[ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:33:02.749801 | orchestrator | Monday 10 February 2025 09:26:42 +0000 (0:00:00.645) 0:08:14.145 ******* 2025-02-10 09:33:02.749809 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.749817 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.749825 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.749833 | orchestrator | 2025-02-10 09:33:02.749841 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:33:02.749849 | orchestrator | Monday 10 February 2025 09:26:42 +0000 (0:00:00.340) 0:08:14.485 ******* 2025-02-10 09:33:02.749857 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.749865 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.749873 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.749881 | orchestrator | 2025-02-10 09:33:02.749889 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:33:02.749897 | orchestrator | Monday 10 February 2025 09:26:42 +0000 (0:00:00.341) 0:08:14.827 ******* 2025-02-10 09:33:02.749905 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.749913 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.749921 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.749929 | orchestrator | 2025-02-10 09:33:02.749955 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:33:02.749963 | orchestrator | Monday 10 February 2025 09:26:43 +0000 (0:00:00.412) 0:08:15.240 ******* 2025-02-10 09:33:02.749971 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.749979 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.749987 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.749994 | orchestrator | 2025-02-10 09:33:02.750002 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:33:02.750010 | orchestrator | Monday 10 February 2025 09:26:44 +0000 (0:00:00.635) 0:08:15.876 ******* 2025-02-10 09:33:02.750046 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.750055 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.750062 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.750070 | orchestrator | 2025-02-10 09:33:02.750078 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:33:02.750086 | orchestrator | Monday 10 February 2025 09:26:44 +0000 (0:00:00.743) 0:08:16.619 ******* 2025-02-10 09:33:02.750094 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750102 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750110 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750117 | orchestrator | 2025-02-10 09:33:02.750125 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:33:02.750139 | orchestrator | Monday 10 February 2025 09:26:45 +0000 (0:00:00.367) 0:08:16.987 ******* 2025-02-10 09:33:02.750146 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750154 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750162 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750170 | orchestrator | 2025-02-10 09:33:02.750178 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 
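For reference, the ceph-mgr module management at the end of the previous play ("disable ceph mgr enabled modules" for iostat, nfs and restful, then "add modules to ceph-mgr" for dashboard and prometheus) maps onto plain Ceph CLI calls. A minimal sketch of the equivalent manual steps from a monitor/admin node; that the role wraps exactly these commands is an assumption, only the module names are taken from the log above:
    # inspect which mgr modules are currently enabled
    ceph mgr module ls
    # drop the modules the deployment does not want enabled here
    ceph mgr module disable iostat
    ceph mgr module disable nfs
    ceph mgr module disable restful
    # enable the ones requested by the deployment
    ceph mgr module enable dashboard
    ceph mgr module enable prometheus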
2025-02-10 09:33:02.750186 | orchestrator | Monday 10 February 2025 09:26:45 +0000 (0:00:00.332) 0:08:17.320 ******* 2025-02-10 09:33:02.750193 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.750201 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.750209 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.750217 | orchestrator | 2025-02-10 09:33:02.750228 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:33:02.750236 | orchestrator | Monday 10 February 2025 09:26:46 +0000 (0:00:00.676) 0:08:17.996 ******* 2025-02-10 09:33:02.750244 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.750252 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.750260 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.750268 | orchestrator | 2025-02-10 09:33:02.750276 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:33:02.750283 | orchestrator | Monday 10 February 2025 09:26:46 +0000 (0:00:00.364) 0:08:18.360 ******* 2025-02-10 09:33:02.750291 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.750299 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.750315 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.750323 | orchestrator | 2025-02-10 09:33:02.750340 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:33:02.750356 | orchestrator | Monday 10 February 2025 09:26:46 +0000 (0:00:00.341) 0:08:18.702 ******* 2025-02-10 09:33:02.750364 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750372 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750380 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750388 | orchestrator | 2025-02-10 09:33:02.750396 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:33:02.750404 | orchestrator | Monday 10 February 2025 09:26:47 +0000 (0:00:00.371) 0:08:19.073 ******* 2025-02-10 09:33:02.750412 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750419 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750427 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750435 | orchestrator | 2025-02-10 09:33:02.750443 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:33:02.750451 | orchestrator | Monday 10 February 2025 09:26:47 +0000 (0:00:00.684) 0:08:19.758 ******* 2025-02-10 09:33:02.750458 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750466 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750474 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750482 | orchestrator | 2025-02-10 09:33:02.750497 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:33:02.750505 | orchestrator | Monday 10 February 2025 09:26:48 +0000 (0:00:00.409) 0:08:20.167 ******* 2025-02-10 09:33:02.750513 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.750521 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.750529 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.750537 | orchestrator | 2025-02-10 09:33:02.750545 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:33:02.750553 | orchestrator | Monday 10 February 2025 09:26:48 +0000 (0:00:00.363) 0:08:20.531 ******* 2025-02-10 
09:33:02.750561 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750569 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750600 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750609 | orchestrator | 2025-02-10 09:33:02.750617 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:33:02.750625 | orchestrator | Monday 10 February 2025 09:26:49 +0000 (0:00:00.453) 0:08:20.985 ******* 2025-02-10 09:33:02.750638 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750646 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750654 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750666 | orchestrator | 2025-02-10 09:33:02.750675 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:33:02.750683 | orchestrator | Monday 10 February 2025 09:26:49 +0000 (0:00:00.718) 0:08:21.704 ******* 2025-02-10 09:33:02.750691 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750699 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750707 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750715 | orchestrator | 2025-02-10 09:33:02.750723 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:33:02.750730 | orchestrator | Monday 10 February 2025 09:26:50 +0000 (0:00:00.370) 0:08:22.075 ******* 2025-02-10 09:33:02.750738 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750746 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750754 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750762 | orchestrator | 2025-02-10 09:33:02.750770 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:33:02.750778 | orchestrator | Monday 10 February 2025 09:26:50 +0000 (0:00:00.379) 0:08:22.454 ******* 2025-02-10 09:33:02.750785 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750793 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750801 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750809 | orchestrator | 2025-02-10 09:33:02.750817 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:33:02.750825 | orchestrator | Monday 10 February 2025 09:26:51 +0000 (0:00:00.449) 0:08:22.904 ******* 2025-02-10 09:33:02.750833 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750840 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750848 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750856 | orchestrator | 2025-02-10 09:33:02.750864 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:33:02.750872 | orchestrator | Monday 10 February 2025 09:26:51 +0000 (0:00:00.828) 0:08:23.733 ******* 2025-02-10 09:33:02.750880 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.750887 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750895 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750903 | orchestrator | 2025-02-10 09:33:02.750911 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:33:02.750920 | orchestrator | Monday 10 February 2025 09:26:52 +0000 (0:00:00.424) 0:08:24.157 ******* 2025-02-10 09:33:02.750928 | orchestrator | 
skipping: [testbed-node-3] 2025-02-10 09:33:02.750973 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.750981 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.750989 | orchestrator | 2025-02-10 09:33:02.750997 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:33:02.751005 | orchestrator | Monday 10 February 2025 09:26:52 +0000 (0:00:00.425) 0:08:24.583 ******* 2025-02-10 09:33:02.751013 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751021 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751029 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751037 | orchestrator | 2025-02-10 09:33:02.751045 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:33:02.751053 | orchestrator | Monday 10 February 2025 09:26:53 +0000 (0:00:00.469) 0:08:25.053 ******* 2025-02-10 09:33:02.751061 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751069 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751077 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751085 | orchestrator | 2025-02-10 09:33:02.751102 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:33:02.751128 | orchestrator | Monday 10 February 2025 09:26:54 +0000 (0:00:00.875) 0:08:25.929 ******* 2025-02-10 09:33:02.751137 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751144 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751151 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751158 | orchestrator | 2025-02-10 09:33:02.751165 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:33:02.751172 | orchestrator | Monday 10 February 2025 09:26:54 +0000 (0:00:00.444) 0:08:26.374 ******* 2025-02-10 09:33:02.751178 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751185 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751192 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751199 | orchestrator | 2025-02-10 09:33:02.751210 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:33:02.751217 | orchestrator | Monday 10 February 2025 09:26:54 +0000 (0:00:00.368) 0:08:26.742 ******* 2025-02-10 09:33:02.751224 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.751231 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.751238 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751245 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.751252 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.751259 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751266 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.751273 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.751280 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751287 | orchestrator | 2025-02-10 09:33:02.751294 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:33:02.751301 | orchestrator | Monday 10 February 2025 09:26:55 +0000 (0:00:00.497) 0:08:27.240 ******* 2025-02-10 
09:33:02.751315 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:33:02.751322 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:33:02.751329 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751357 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:33:02.751365 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:33:02.751372 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751379 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:33:02.751386 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:33:02.751393 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751400 | orchestrator | 2025-02-10 09:33:02.751407 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:33:02.751414 | orchestrator | Monday 10 February 2025 09:26:56 +0000 (0:00:00.810) 0:08:28.050 ******* 2025-02-10 09:33:02.751420 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751427 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751434 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751441 | orchestrator | 2025-02-10 09:33:02.751448 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:33:02.751455 | orchestrator | Monday 10 February 2025 09:26:56 +0000 (0:00:00.369) 0:08:28.420 ******* 2025-02-10 09:33:02.751462 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751469 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751476 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751482 | orchestrator | 2025-02-10 09:33:02.751489 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.751496 | orchestrator | Monday 10 February 2025 09:26:56 +0000 (0:00:00.349) 0:08:28.770 ******* 2025-02-10 09:33:02.751503 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751510 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751532 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751539 | orchestrator | 2025-02-10 09:33:02.751546 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.751553 | orchestrator | Monday 10 February 2025 09:26:57 +0000 (0:00:00.410) 0:08:29.181 ******* 2025-02-10 09:33:02.751560 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751567 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751574 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751581 | orchestrator | 2025-02-10 09:33:02.751588 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.751595 | orchestrator | Monday 10 February 2025 09:26:58 +0000 (0:00:00.707) 0:08:29.888 ******* 2025-02-10 09:33:02.751601 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751608 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751619 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751626 | orchestrator | 2025-02-10 09:33:02.751633 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.751640 | 
orchestrator | Monday 10 February 2025 09:26:58 +0000 (0:00:00.367) 0:08:30.256 ******* 2025-02-10 09:33:02.751653 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751661 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751667 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751674 | orchestrator | 2025-02-10 09:33:02.751681 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.751688 | orchestrator | Monday 10 February 2025 09:26:58 +0000 (0:00:00.373) 0:08:30.629 ******* 2025-02-10 09:33:02.751695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.751702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.751709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.751716 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751723 | orchestrator | 2025-02-10 09:33:02.751730 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.751737 | orchestrator | Monday 10 February 2025 09:26:59 +0000 (0:00:00.485) 0:08:31.115 ******* 2025-02-10 09:33:02.751744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.751751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.751757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.751764 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751771 | orchestrator | 2025-02-10 09:33:02.751778 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:33:02.751785 | orchestrator | Monday 10 February 2025 09:27:00 +0000 (0:00:00.789) 0:08:31.905 ******* 2025-02-10 09:33:02.751792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.751799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.751806 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.751813 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751820 | orchestrator | 2025-02-10 09:33:02.751827 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.751834 | orchestrator | Monday 10 February 2025 09:27:01 +0000 (0:00:01.070) 0:08:32.976 ******* 2025-02-10 09:33:02.751841 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751848 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751854 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751861 | orchestrator | 2025-02-10 09:33:02.751868 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.751875 | orchestrator | Monday 10 February 2025 09:27:01 +0000 (0:00:00.420) 0:08:33.396 ******* 2025-02-10 09:33:02.751882 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.751889 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751896 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.751908 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751915 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.751922 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.751929 | orchestrator | 2025-02-10 
09:33:02.751950 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.751957 | orchestrator | Monday 10 February 2025 09:27:02 +0000 (0:00:00.527) 0:08:33.924 ******* 2025-02-10 09:33:02.751982 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.751990 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.751997 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752004 | orchestrator | 2025-02-10 09:33:02.752011 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.752018 | orchestrator | Monday 10 February 2025 09:27:02 +0000 (0:00:00.351) 0:08:34.275 ******* 2025-02-10 09:33:02.752024 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752031 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752038 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752045 | orchestrator | 2025-02-10 09:33:02.752052 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.752059 | orchestrator | Monday 10 February 2025 09:27:03 +0000 (0:00:00.685) 0:08:34.961 ******* 2025-02-10 09:33:02.752066 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.752073 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752079 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.752086 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752093 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.752100 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752107 | orchestrator | 2025-02-10 09:33:02.752114 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.752126 | orchestrator | Monday 10 February 2025 09:27:03 +0000 (0:00:00.560) 0:08:35.521 ******* 2025-02-10 09:33:02.752137 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.752144 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752151 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.752158 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752165 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.752172 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752179 | orchestrator | 2025-02-10 09:33:02.752185 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.752192 | orchestrator | Monday 10 February 2025 09:27:04 +0000 (0:00:00.395) 0:08:35.916 ******* 2025-02-10 09:33:02.752199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.752206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.752213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.752220 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:33:02.752226 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:33:02.752233 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:33:02.752240 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752247 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752254 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:33:02.752261 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:33:02.752268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:33:02.752274 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752286 | orchestrator | 2025-02-10 09:33:02.752293 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:33:02.752300 | orchestrator | Monday 10 February 2025 09:27:05 +0000 (0:00:00.937) 0:08:36.854 ******* 2025-02-10 09:33:02.752307 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752316 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752327 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752338 | orchestrator | 2025-02-10 09:33:02.752349 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:33:02.752359 | orchestrator | Monday 10 February 2025 09:27:05 +0000 (0:00:00.652) 0:08:37.506 ******* 2025-02-10 09:33:02.752371 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.752378 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:33:02.752385 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752392 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752399 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:33:02.752406 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752413 | orchestrator | 2025-02-10 09:33:02.752420 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:33:02.752427 | orchestrator | Monday 10 February 2025 09:27:06 +0000 (0:00:00.920) 0:08:38.427 ******* 2025-02-10 09:33:02.752433 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752440 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752447 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752454 | orchestrator | 2025-02-10 09:33:02.752461 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:33:02.752468 | orchestrator | Monday 10 February 2025 09:27:07 +0000 (0:00:00.671) 0:08:39.098 ******* 2025-02-10 09:33:02.752475 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752481 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752488 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752495 | orchestrator | 2025-02-10 09:33:02.752502 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-02-10 09:33:02.752509 | orchestrator | Monday 10 February 2025 09:27:08 +0000 (0:00:00.911) 0:08:40.010 ******* 2025-02-10 09:33:02.752516 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.752523 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.752529 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.752536 | orchestrator | 2025-02-10 09:33:02.752543 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-02-10 09:33:02.752550 | orchestrator | Monday 10 February 2025 09:27:08 +0000 (0:00:00.393) 
0:08:40.403 ******* 2025-02-10 09:33:02.752576 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:33:02.752584 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:02.752591 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:02.752598 | orchestrator | 2025-02-10 09:33:02.752605 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-02-10 09:33:02.752612 | orchestrator | Monday 10 February 2025 09:27:09 +0000 (0:00:00.877) 0:08:41.281 ******* 2025-02-10 09:33:02.752619 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.752626 | orchestrator | 2025-02-10 09:33:02.752633 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-02-10 09:33:02.752639 | orchestrator | Monday 10 February 2025 09:27:10 +0000 (0:00:00.655) 0:08:41.936 ******* 2025-02-10 09:33:02.752646 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752653 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752660 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752667 | orchestrator | 2025-02-10 09:33:02.752674 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-02-10 09:33:02.752686 | orchestrator | Monday 10 February 2025 09:27:10 +0000 (0:00:00.829) 0:08:42.766 ******* 2025-02-10 09:33:02.752693 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752700 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752707 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752714 | orchestrator | 2025-02-10 09:33:02.752721 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-02-10 09:33:02.752728 | orchestrator | Monday 10 February 2025 09:27:11 +0000 (0:00:00.532) 0:08:43.299 ******* 2025-02-10 09:33:02.752735 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752741 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752754 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752761 | orchestrator | 2025-02-10 09:33:02.752768 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-02-10 09:33:02.752775 | orchestrator | Monday 10 February 2025 09:27:12 +0000 (0:00:00.603) 0:08:43.902 ******* 2025-02-10 09:33:02.752782 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.752788 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.752795 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.752802 | orchestrator | 2025-02-10 09:33:02.752809 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-02-10 09:33:02.752816 | orchestrator | Monday 10 February 2025 09:27:12 +0000 (0:00:00.459) 0:08:44.362 ******* 2025-02-10 09:33:02.752823 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.752830 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.752836 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.752843 | orchestrator | 2025-02-10 09:33:02.752850 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-02-10 09:33:02.752857 | orchestrator | Monday 10 February 2025 09:27:13 
+0000 (0:00:01.066) 0:08:45.428 ******* 2025-02-10 09:33:02.752864 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.752871 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.752878 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.752884 | orchestrator | 2025-02-10 09:33:02.752895 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-02-10 09:33:02.752903 | orchestrator | Monday 10 February 2025 09:27:13 +0000 (0:00:00.388) 0:08:45.817 ******* 2025-02-10 09:33:02.752920 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-10 09:33:02.752927 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-10 09:33:02.752968 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-10 09:33:02.752976 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-10 09:33:02.752983 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-10 09:33:02.752990 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-10 09:33:02.752997 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-10 09:33:02.753004 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-10 09:33:02.753012 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-10 09:33:02.753018 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-10 09:33:02.753025 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-10 09:33:02.753032 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-10 09:33:02.753040 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-10 09:33:02.753047 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-10 09:33:02.753053 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-10 09:33:02.753065 | orchestrator | 2025-02-10 09:33:02.753072 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-02-10 09:33:02.753079 | orchestrator | Monday 10 February 2025 09:27:18 +0000 (0:00:04.308) 0:08:50.125 ******* 2025-02-10 09:33:02.753086 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.753093 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.753100 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.753107 | orchestrator | 2025-02-10 09:33:02.753114 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-02-10 09:33:02.753141 | orchestrator | Monday 10 February 2025 09:27:18 +0000 (0:00:00.592) 0:08:50.719 ******* 2025-02-10 09:33:02.753149 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.753156 | orchestrator | 2025-02-10 09:33:02.753163 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-02-10 
09:33:02.753170 | orchestrator | Monday 10 February 2025 09:27:19 +0000 (0:00:00.652) 0:08:51.371 ******* 2025-02-10 09:33:02.753177 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-10 09:33:02.753184 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-10 09:33:02.753191 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-10 09:33:02.753198 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-02-10 09:33:02.753205 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-02-10 09:33:02.753211 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-02-10 09:33:02.753218 | orchestrator | 2025-02-10 09:33:02.753225 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-02-10 09:33:02.753232 | orchestrator | Monday 10 February 2025 09:27:20 +0000 (0:00:01.395) 0:08:52.767 ******* 2025-02-10 09:33:02.753239 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:02.753246 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.753253 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-10 09:33:02.753260 | orchestrator | 2025-02-10 09:33:02.753267 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-02-10 09:33:02.753274 | orchestrator | Monday 10 February 2025 09:27:22 +0000 (0:00:01.905) 0:08:54.672 ******* 2025-02-10 09:33:02.753281 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:33:02.753297 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.753304 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.753311 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:33:02.753318 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:33:02.753325 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.753332 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:33:02.753339 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:33:02.753346 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.753353 | orchestrator | 2025-02-10 09:33:02.753360 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-02-10 09:33:02.753367 | orchestrator | Monday 10 February 2025 09:27:24 +0000 (0:00:01.341) 0:08:56.014 ******* 2025-02-10 09:33:02.753374 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:33:02.753380 | orchestrator | 2025-02-10 09:33:02.753388 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-02-10 09:33:02.753394 | orchestrator | Monday 10 February 2025 09:27:26 +0000 (0:00:02.534) 0:08:58.548 ******* 2025-02-10 09:33:02.753401 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.753408 | orchestrator | 2025-02-10 09:33:02.753422 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-02-10 09:33:02.753434 | orchestrator | Monday 10 February 2025 09:27:27 +0000 (0:00:00.815) 0:08:59.364 ******* 2025-02-10 09:33:02.753441 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.753448 | 
orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.753455 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.753462 | orchestrator | 2025-02-10 09:33:02.753469 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-02-10 09:33:02.753476 | orchestrator | Monday 10 February 2025 09:27:27 +0000 (0:00:00.353) 0:08:59.717 ******* 2025-02-10 09:33:02.753483 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.753490 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.753497 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.753503 | orchestrator | 2025-02-10 09:33:02.753511 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-02-10 09:33:02.753518 | orchestrator | Monday 10 February 2025 09:27:28 +0000 (0:00:00.385) 0:09:00.102 ******* 2025-02-10 09:33:02.753524 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.753530 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.753536 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.753543 | orchestrator | 2025-02-10 09:33:02.753549 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-02-10 09:33:02.753555 | orchestrator | Monday 10 February 2025 09:27:28 +0000 (0:00:00.630) 0:09:00.733 ******* 2025-02-10 09:33:02.753561 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.753567 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.753574 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.753580 | orchestrator | 2025-02-10 09:33:02.753586 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-02-10 09:33:02.753595 | orchestrator | Monday 10 February 2025 09:27:29 +0000 (0:00:00.416) 0:09:01.150 ******* 2025-02-10 09:33:02.753602 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.753608 | orchestrator | 2025-02-10 09:33:02.753614 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-02-10 09:33:02.753620 | orchestrator | Monday 10 February 2025 09:27:29 +0000 (0:00:00.599) 0:09:01.749 ******* 2025-02-10 09:33:02.753627 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec', 'data_vg': 'ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec'}) 2025-02-10 09:33:02.753649 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-89c58721-f175-5d0e-8750-3436c1d71ced', 'data_vg': 'ceph-89c58721-f175-5d0e-8750-3436c1d71ced'}) 2025-02-10 09:33:02.753684 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5101bad7-da03-58be-8044-cbe4500fcec9', 'data_vg': 'ceph-5101bad7-da03-58be-8044-cbe4500fcec9'}) 2025-02-10 09:33:02.753692 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f3b4a615-299b-50bf-af8e-26b6dc38e729', 'data_vg': 'ceph-f3b4a615-299b-50bf-af8e-26b6dc38e729'}) 2025-02-10 09:33:02.753698 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-989340a3-ac62-57b3-a342-92d58018bc1c', 'data_vg': 'ceph-989340a3-ac62-57b3-a342-92d58018bc1c'}) 2025-02-10 09:33:02.753704 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d59ecc87-3940-56cd-881a-fbc914ec02de', 'data_vg': 
'ceph-d59ecc87-3940-56cd-881a-fbc914ec02de'}) 2025-02-10 09:33:02.753710 | orchestrator | 2025-02-10 09:33:02.753717 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-02-10 09:33:02.753723 | orchestrator | Monday 10 February 2025 09:28:10 +0000 (0:00:40.172) 0:09:41.921 ******* 2025-02-10 09:33:02.753729 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.753735 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.753741 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.753747 | orchestrator | 2025-02-10 09:33:02.753754 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-02-10 09:33:02.753764 | orchestrator | Monday 10 February 2025 09:28:10 +0000 (0:00:00.571) 0:09:42.493 ******* 2025-02-10 09:33:02.753770 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.753776 | orchestrator | 2025-02-10 09:33:02.753783 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-02-10 09:33:02.753789 | orchestrator | Monday 10 February 2025 09:28:11 +0000 (0:00:00.726) 0:09:43.220 ******* 2025-02-10 09:33:02.753795 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.753801 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.753807 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.753816 | orchestrator | 2025-02-10 09:33:02.753823 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-02-10 09:33:02.753829 | orchestrator | Monday 10 February 2025 09:28:12 +0000 (0:00:00.788) 0:09:44.009 ******* 2025-02-10 09:33:02.753835 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.753842 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.753848 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.753854 | orchestrator | 2025-02-10 09:33:02.753860 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-02-10 09:33:02.753866 | orchestrator | Monday 10 February 2025 09:28:14 +0000 (0:00:02.160) 0:09:46.169 ******* 2025-02-10 09:33:02.753872 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.753878 | orchestrator | 2025-02-10 09:33:02.753884 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-02-10 09:33:02.753891 | orchestrator | Monday 10 February 2025 09:28:14 +0000 (0:00:00.641) 0:09:46.811 ******* 2025-02-10 09:33:02.753897 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.753903 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.753909 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.753915 | orchestrator | 2025-02-10 09:33:02.753921 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-02-10 09:33:02.753927 | orchestrator | Monday 10 February 2025 09:28:16 +0000 (0:00:01.568) 0:09:48.380 ******* 2025-02-10 09:33:02.753948 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.753955 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.753961 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.753967 | orchestrator | 2025-02-10 09:33:02.753973 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 
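For reference, the "use ceph-volume to create bluestore osds" task above loops over pre-created LVM volume groups and logical volumes (the 'data_vg'/'data' pairs in the items), and the container environment selected earlier ('-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1') means encrypted bluestore OSDs. A minimal sketch of one equivalent manual call, reusing the first VG/LV pair from the log; running it by hand like this instead of through the ceph-ansible container wrapper is an assumption:
    # create a single encrypted bluestore OSD from an existing VG/LV pair
    ceph-volume lvm create --bluestore --dmcrypt \
        --data ceph-70e6c2b1-f69e-5685-9251-bc72a13d87ec/osd-block-70e6c2b1-f69e-5685-9251-bc72a13d87ec
    # confirm which OSD ids were created on this host
    ceph-volume lvm list
    # enable the generated target so OSD units start on boot, then start one OSD instance
    systemctl enable ceph-osd.target
    systemctl start ceph-osd@1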
2025-02-10 09:33:02.753980 | orchestrator | Monday 10 February 2025 09:28:17 +0000 (0:00:01.241) 0:09:49.622 ******* 2025-02-10 09:33:02.753986 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.753992 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.753998 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.754004 | orchestrator | 2025-02-10 09:33:02.754010 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-02-10 09:33:02.754033 | orchestrator | Monday 10 February 2025 09:28:19 +0000 (0:00:01.859) 0:09:51.481 ******* 2025-02-10 09:33:02.754040 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754046 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754052 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.754058 | orchestrator | 2025-02-10 09:33:02.754064 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-02-10 09:33:02.754070 | orchestrator | Monday 10 February 2025 09:28:20 +0000 (0:00:00.367) 0:09:51.849 ******* 2025-02-10 09:33:02.754076 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754082 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754092 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.754098 | orchestrator | 2025-02-10 09:33:02.754104 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-02-10 09:33:02.754110 | orchestrator | Monday 10 February 2025 09:28:20 +0000 (0:00:00.713) 0:09:52.563 ******* 2025-02-10 09:33:02.754120 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-02-10 09:33:02.754126 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-10 09:33:02.754132 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-02-10 09:33:02.754138 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-02-10 09:33:02.754145 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-02-10 09:33:02.754151 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-02-10 09:33:02.754157 | orchestrator | 2025-02-10 09:33:02.754163 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-02-10 09:33:02.754169 | orchestrator | Monday 10 February 2025 09:28:22 +0000 (0:00:01.284) 0:09:53.847 ******* 2025-02-10 09:33:02.754193 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-02-10 09:33:02.754200 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-02-10 09:33:02.754206 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-02-10 09:33:02.754212 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-02-10 09:33:02.754218 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-02-10 09:33:02.754224 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-02-10 09:33:02.754230 | orchestrator | 2025-02-10 09:33:02.754237 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-02-10 09:33:02.754243 | orchestrator | Monday 10 February 2025 09:28:26 +0000 (0:00:04.035) 0:09:57.882 ******* 2025-02-10 09:33:02.754249 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754255 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754261 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:33:02.754267 | orchestrator | 2025-02-10 09:33:02.754274 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 
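The "unset noup flag" task above (delegated to the monitor node testbed-node-0) and the "wait for all osd to be up" task that follows reflect the usual activation pattern: new OSDs are kept from being marked up until every per-id OSD service has started, then the flag is cleared and the playbook polls the cluster until all OSDs report up. The single FAILED - RETRYING message in the next entry is just that retry loop, not a hard failure. Roughly, in plain systemctl/ceph terms (illustrative sketch, not the exact commands the role runs; OSD ids 1 and 3 are the ones shown for testbed-node-3):

    systemctl start ceph-osd@1 ceph-osd@3   # per-host OSD ids, as in the 'systemd start osd' task
    ceph osd unset noup                     # allow the freshly started OSDs to be marked up
    ceph osd stat                           # polled until it shows e.g. '6 osds: 6 up, 6 in'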
2025-02-10 09:33:02.754280 | orchestrator | Monday 10 February 2025 09:28:29 +0000 (0:00:03.166) 0:10:01.049 ******* 2025-02-10 09:33:02.754286 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754292 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754298 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-02-10 09:33:02.754305 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:33:02.754311 | orchestrator | 2025-02-10 09:33:02.754317 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-02-10 09:33:02.754326 | orchestrator | Monday 10 February 2025 09:28:41 +0000 (0:00:12.614) 0:10:13.663 ******* 2025-02-10 09:33:02.754332 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754344 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754350 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.754356 | orchestrator | 2025-02-10 09:33:02.754362 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-02-10 09:33:02.754368 | orchestrator | Monday 10 February 2025 09:28:42 +0000 (0:00:00.645) 0:10:14.309 ******* 2025-02-10 09:33:02.754409 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754417 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754424 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.754430 | orchestrator | 2025-02-10 09:33:02.754436 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:33:02.754442 | orchestrator | Monday 10 February 2025 09:28:43 +0000 (0:00:01.301) 0:10:15.610 ******* 2025-02-10 09:33:02.754448 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.754454 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.754460 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.754466 | orchestrator | 2025-02-10 09:33:02.754472 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-02-10 09:33:02.754478 | orchestrator | Monday 10 February 2025 09:28:44 +0000 (0:00:01.085) 0:10:16.696 ******* 2025-02-10 09:33:02.754484 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.754491 | orchestrator | 2025-02-10 09:33:02.754497 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-02-10 09:33:02.754507 | orchestrator | Monday 10 February 2025 09:28:45 +0000 (0:00:00.708) 0:10:17.405 ******* 2025-02-10 09:33:02.754513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.754520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.754526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.754532 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754538 | orchestrator | 2025-02-10 09:33:02.754544 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-02-10 09:33:02.754550 | orchestrator | Monday 10 February 2025 09:28:46 +0000 (0:00:00.450) 0:10:17.855 ******* 2025-02-10 09:33:02.754556 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754562 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754568 | orchestrator 
| skipping: [testbed-node-5] 2025-02-10 09:33:02.754574 | orchestrator | 2025-02-10 09:33:02.754580 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-02-10 09:33:02.754587 | orchestrator | Monday 10 February 2025 09:28:46 +0000 (0:00:00.339) 0:10:18.195 ******* 2025-02-10 09:33:02.754593 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754599 | orchestrator | 2025-02-10 09:33:02.754605 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-02-10 09:33:02.754611 | orchestrator | Monday 10 February 2025 09:28:46 +0000 (0:00:00.268) 0:10:18.463 ******* 2025-02-10 09:33:02.754617 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754623 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754629 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.754635 | orchestrator | 2025-02-10 09:33:02.754641 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-02-10 09:33:02.754648 | orchestrator | Monday 10 February 2025 09:28:47 +0000 (0:00:00.676) 0:10:19.139 ******* 2025-02-10 09:33:02.754654 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754660 | orchestrator | 2025-02-10 09:33:02.754666 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-02-10 09:33:02.754672 | orchestrator | Monday 10 February 2025 09:28:47 +0000 (0:00:00.260) 0:10:19.400 ******* 2025-02-10 09:33:02.754678 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754684 | orchestrator | 2025-02-10 09:33:02.754690 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-02-10 09:33:02.754696 | orchestrator | Monday 10 February 2025 09:28:47 +0000 (0:00:00.249) 0:10:19.649 ******* 2025-02-10 09:33:02.754702 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754708 | orchestrator | 2025-02-10 09:33:02.754715 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-02-10 09:33:02.754721 | orchestrator | Monday 10 February 2025 09:28:47 +0000 (0:00:00.142) 0:10:19.791 ******* 2025-02-10 09:33:02.754727 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754733 | orchestrator | 2025-02-10 09:33:02.754757 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-02-10 09:33:02.754764 | orchestrator | Monday 10 February 2025 09:28:48 +0000 (0:00:00.261) 0:10:20.053 ******* 2025-02-10 09:33:02.754770 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754776 | orchestrator | 2025-02-10 09:33:02.754782 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-02-10 09:33:02.754788 | orchestrator | Monday 10 February 2025 09:28:48 +0000 (0:00:00.240) 0:10:20.294 ******* 2025-02-10 09:33:02.754794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.754800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.754807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.754813 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754819 | orchestrator | 2025-02-10 09:33:02.754825 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-02-10 09:33:02.754831 | orchestrator | Monday 10 February 2025 
09:28:48 +0000 (0:00:00.488) 0:10:20.782 ******* 2025-02-10 09:33:02.754842 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754848 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.754854 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.754860 | orchestrator | 2025-02-10 09:33:02.754866 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-02-10 09:33:02.754872 | orchestrator | Monday 10 February 2025 09:28:49 +0000 (0:00:00.607) 0:10:21.390 ******* 2025-02-10 09:33:02.754878 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754885 | orchestrator | 2025-02-10 09:33:02.754891 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-02-10 09:33:02.754901 | orchestrator | Monday 10 February 2025 09:28:49 +0000 (0:00:00.253) 0:10:21.644 ******* 2025-02-10 09:33:02.754907 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.754913 | orchestrator | 2025-02-10 09:33:02.754919 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:33:02.754925 | orchestrator | Monday 10 February 2025 09:28:50 +0000 (0:00:00.309) 0:10:21.953 ******* 2025-02-10 09:33:02.754953 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.754964 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.754976 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.754987 | orchestrator | 2025-02-10 09:33:02.754997 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-02-10 09:33:02.755003 | orchestrator | 2025-02-10 09:33:02.755010 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:33:02.755016 | orchestrator | Monday 10 February 2025 09:28:53 +0000 (0:00:03.295) 0:10:25.249 ******* 2025-02-10 09:33:02.755022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.755030 | orchestrator | 2025-02-10 09:33:02.755036 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:33:02.755043 | orchestrator | Monday 10 February 2025 09:28:54 +0000 (0:00:01.455) 0:10:26.705 ******* 2025-02-10 09:33:02.755049 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755055 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.755061 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755067 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.755074 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755080 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.755086 | orchestrator | 2025-02-10 09:33:02.755092 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:33:02.755098 | orchestrator | Monday 10 February 2025 09:28:55 +0000 (0:00:00.781) 0:10:27.487 ******* 2025-02-10 09:33:02.755104 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755110 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755116 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755123 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.755133 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.755140 | orchestrator | ok: [testbed-node-5] 2025-02-10 
09:33:02.755146 | orchestrator | 2025-02-10 09:33:02.755152 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:33:02.755166 | orchestrator | Monday 10 February 2025 09:28:57 +0000 (0:00:01.375) 0:10:28.862 ******* 2025-02-10 09:33:02.755172 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755178 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755184 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755190 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.755196 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.755203 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.755209 | orchestrator | 2025-02-10 09:33:02.755215 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:33:02.755221 | orchestrator | Monday 10 February 2025 09:28:58 +0000 (0:00:01.439) 0:10:30.302 ******* 2025-02-10 09:33:02.755232 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755238 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755244 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755250 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.755256 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.755262 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.755268 | orchestrator | 2025-02-10 09:33:02.755275 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:33:02.755281 | orchestrator | Monday 10 February 2025 09:28:59 +0000 (0:00:01.285) 0:10:31.588 ******* 2025-02-10 09:33:02.755287 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755293 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.755299 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755305 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.755311 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755317 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.755323 | orchestrator | 2025-02-10 09:33:02.755330 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:33:02.755336 | orchestrator | Monday 10 February 2025 09:29:00 +0000 (0:00:01.168) 0:10:32.756 ******* 2025-02-10 09:33:02.755342 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755348 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755354 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755377 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755384 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755391 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755397 | orchestrator | 2025-02-10 09:33:02.755411 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:33:02.755423 | orchestrator | Monday 10 February 2025 09:29:01 +0000 (0:00:00.745) 0:10:33.501 ******* 2025-02-10 09:33:02.755429 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755435 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755441 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755447 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755453 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755459 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755465 | orchestrator | 2025-02-10 
09:33:02.755472 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:33:02.755478 | orchestrator | Monday 10 February 2025 09:29:02 +0000 (0:00:00.976) 0:10:34.478 ******* 2025-02-10 09:33:02.755484 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755491 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755502 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755513 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755525 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755532 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755538 | orchestrator | 2025-02-10 09:33:02.755544 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:33:02.755551 | orchestrator | Monday 10 February 2025 09:29:03 +0000 (0:00:00.717) 0:10:35.195 ******* 2025-02-10 09:33:02.755557 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755563 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755569 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755575 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755581 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755587 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755593 | orchestrator | 2025-02-10 09:33:02.755599 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:33:02.755605 | orchestrator | Monday 10 February 2025 09:29:04 +0000 (0:00:00.983) 0:10:36.179 ******* 2025-02-10 09:33:02.755611 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755617 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755623 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755630 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755640 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755646 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755656 | orchestrator | 2025-02-10 09:33:02.755662 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:33:02.755668 | orchestrator | Monday 10 February 2025 09:29:05 +0000 (0:00:00.697) 0:10:36.877 ******* 2025-02-10 09:33:02.755675 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.755681 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.755687 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.755693 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.755699 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.755705 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.755711 | orchestrator | 2025-02-10 09:33:02.755717 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:33:02.755723 | orchestrator | Monday 10 February 2025 09:29:06 +0000 (0:00:01.895) 0:10:38.773 ******* 2025-02-10 09:33:02.755730 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755736 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755742 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755748 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755754 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755760 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755766 | orchestrator | 2025-02-10 09:33:02.755772 | orchestrator 
| TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:33:02.755778 | orchestrator | Monday 10 February 2025 09:29:08 +0000 (0:00:01.131) 0:10:39.904 ******* 2025-02-10 09:33:02.755785 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.755791 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.755797 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.755803 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.755809 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.755815 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.755821 | orchestrator | 2025-02-10 09:33:02.755830 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:33:02.755836 | orchestrator | Monday 10 February 2025 09:29:09 +0000 (0:00:01.607) 0:10:41.511 ******* 2025-02-10 09:33:02.755842 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755849 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755855 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755861 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.755867 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.755873 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.755879 | orchestrator | 2025-02-10 09:33:02.755885 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:33:02.755891 | orchestrator | Monday 10 February 2025 09:29:10 +0000 (0:00:00.917) 0:10:42.429 ******* 2025-02-10 09:33:02.755897 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755903 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755909 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755915 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.755922 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.755928 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.755949 | orchestrator | 2025-02-10 09:33:02.755956 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:33:02.755962 | orchestrator | Monday 10 February 2025 09:29:11 +0000 (0:00:01.049) 0:10:43.478 ******* 2025-02-10 09:33:02.755968 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.755974 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.755980 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.755986 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.755992 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.755999 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.756005 | orchestrator | 2025-02-10 09:33:02.756011 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:33:02.756021 | orchestrator | Monday 10 February 2025 09:29:12 +0000 (0:00:00.770) 0:10:44.249 ******* 2025-02-10 09:33:02.756046 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756053 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756059 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756065 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756071 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756077 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756083 | orchestrator | 2025-02-10 09:33:02.756089 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] 
****************************** 2025-02-10 09:33:02.756095 | orchestrator | Monday 10 February 2025 09:29:13 +0000 (0:00:01.079) 0:10:45.328 ******* 2025-02-10 09:33:02.756102 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756108 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756114 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756120 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756126 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756132 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756138 | orchestrator | 2025-02-10 09:33:02.756144 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:33:02.756151 | orchestrator | Monday 10 February 2025 09:29:14 +0000 (0:00:00.791) 0:10:46.120 ******* 2025-02-10 09:33:02.756157 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.756163 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.756169 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.756179 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756185 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756191 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756197 | orchestrator | 2025-02-10 09:33:02.756211 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:33:02.756217 | orchestrator | Monday 10 February 2025 09:29:15 +0000 (0:00:01.286) 0:10:47.407 ******* 2025-02-10 09:33:02.756224 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.756230 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.756236 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.756249 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.756255 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.756261 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.756268 | orchestrator | 2025-02-10 09:33:02.756274 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:33:02.756280 | orchestrator | Monday 10 February 2025 09:29:16 +0000 (0:00:00.749) 0:10:48.156 ******* 2025-02-10 09:33:02.756286 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756292 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756298 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756304 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756311 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756317 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756323 | orchestrator | 2025-02-10 09:33:02.756329 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:33:02.756335 | orchestrator | Monday 10 February 2025 09:29:17 +0000 (0:00:01.010) 0:10:49.166 ******* 2025-02-10 09:33:02.756341 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756347 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756353 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756359 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756365 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756371 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756378 | orchestrator | 2025-02-10 09:33:02.756384 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 
09:33:02.756390 | orchestrator | Monday 10 February 2025 09:29:18 +0000 (0:00:00.713) 0:10:49.880 ******* 2025-02-10 09:33:02.756396 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756402 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756412 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756418 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756424 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756431 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756437 | orchestrator | 2025-02-10 09:33:02.756443 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:33:02.756449 | orchestrator | Monday 10 February 2025 09:29:19 +0000 (0:00:01.015) 0:10:50.895 ******* 2025-02-10 09:33:02.756455 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756461 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756467 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756473 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756479 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756485 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756492 | orchestrator | 2025-02-10 09:33:02.756498 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:33:02.756504 | orchestrator | Monday 10 February 2025 09:29:19 +0000 (0:00:00.826) 0:10:51.721 ******* 2025-02-10 09:33:02.756518 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756529 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756538 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756555 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756565 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756575 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756585 | orchestrator | 2025-02-10 09:33:02.756595 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:33:02.756605 | orchestrator | Monday 10 February 2025 09:29:20 +0000 (0:00:01.095) 0:10:52.817 ******* 2025-02-10 09:33:02.756614 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756624 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756631 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756637 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756643 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756649 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756655 | orchestrator | 2025-02-10 09:33:02.756661 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:33:02.756667 | orchestrator | Monday 10 February 2025 09:29:21 +0000 (0:00:00.851) 0:10:53.669 ******* 2025-02-10 09:33:02.756673 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756679 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756686 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756692 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756698 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756704 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756714 | orchestrator | 2025-02-10 09:33:02.756739 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to 
be created] *** 2025-02-10 09:33:02.756747 | orchestrator | Monday 10 February 2025 09:29:23 +0000 (0:00:01.210) 0:10:54.879 ******* 2025-02-10 09:33:02.756753 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756760 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756779 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756789 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756799 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756809 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756819 | orchestrator | 2025-02-10 09:33:02.756829 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:33:02.756838 | orchestrator | Monday 10 February 2025 09:29:24 +0000 (0:00:00.994) 0:10:55.873 ******* 2025-02-10 09:33:02.756844 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756860 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.756878 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.756889 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.756906 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.756916 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.756926 | orchestrator | 2025-02-10 09:33:02.756969 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:33:02.756977 | orchestrator | Monday 10 February 2025 09:29:25 +0000 (0:00:01.032) 0:10:56.906 ******* 2025-02-10 09:33:02.756988 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.756994 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757000 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757006 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757012 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757018 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757024 | orchestrator | 2025-02-10 09:33:02.757031 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:33:02.757042 | orchestrator | Monday 10 February 2025 09:29:25 +0000 (0:00:00.721) 0:10:57.627 ******* 2025-02-10 09:33:02.757049 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757055 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757061 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757067 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757073 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757079 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757085 | orchestrator | 2025-02-10 09:33:02.757091 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:33:02.757097 | orchestrator | Monday 10 February 2025 09:29:26 +0000 (0:00:01.011) 0:10:58.639 ******* 2025-02-10 09:33:02.757103 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757109 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757116 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757122 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757128 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757134 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757140 | orchestrator | 2025-02-10 09:33:02.757146 | 
orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:33:02.757152 | orchestrator | Monday 10 February 2025 09:29:27 +0000 (0:00:00.736) 0:10:59.375 ******* 2025-02-10 09:33:02.757158 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.757164 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:33:02.757171 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757177 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.757183 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:33:02.757189 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757195 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.757201 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:33:02.757208 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757214 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.757220 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.757226 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757232 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.757238 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.757244 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757250 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.757260 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.757266 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757273 | orchestrator | 2025-02-10 09:33:02.757279 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:33:02.757285 | orchestrator | Monday 10 February 2025 09:29:28 +0000 (0:00:01.204) 0:11:00.579 ******* 2025-02-10 09:33:02.757291 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:33:02.757303 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:33:02.757309 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757315 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 09:33:02.757322 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:33:02.757328 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757334 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:33:02.757340 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:33:02.757346 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757352 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:33:02.757359 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:33:02.757365 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757371 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:33:02.757377 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:33:02.757383 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757389 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:33:02.757415 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:33:02.757422 | orchestrator | skipping: [testbed-node-5] 
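The ceph-config tasks above (reset num_osds, count osds, 'ceph-volume lvm batch --report', 'ceph-volume lvm list', and the osd_memory_target handling) are all skipped in this run, presumably because the OSDs were declared as explicit LVM volumes via the lvm.yml scenario rather than as a device batch. Where they do apply, they amount to queries like the following illustrative sketch (the device paths and the osd.1 id are placeholders, not values from this deployment):

    ceph-volume lvm list                               # OSDs already created on this host
    ceph-volume lvm batch --report /dev/sdb /dev/sdc   # dry run: how many OSDs a batch run would create
    ceph config get osd.1 osd_memory_target            # effective memory target for one OSD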
2025-02-10 09:33:02.757429 | orchestrator | 2025-02-10 09:33:02.757435 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:33:02.757441 | orchestrator | Monday 10 February 2025 09:29:29 +0000 (0:00:00.824) 0:11:01.404 ******* 2025-02-10 09:33:02.757447 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757453 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757459 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757465 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757471 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757478 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757484 | orchestrator | 2025-02-10 09:33:02.757490 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:33:02.757495 | orchestrator | Monday 10 February 2025 09:29:30 +0000 (0:00:01.020) 0:11:02.424 ******* 2025-02-10 09:33:02.757501 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757507 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757513 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757519 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757529 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757535 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757541 | orchestrator | 2025-02-10 09:33:02.757547 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.757554 | orchestrator | Monday 10 February 2025 09:29:31 +0000 (0:00:00.798) 0:11:03.222 ******* 2025-02-10 09:33:02.757560 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757566 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757571 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757577 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757583 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757589 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757594 | orchestrator | 2025-02-10 09:33:02.757600 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.757606 | orchestrator | Monday 10 February 2025 09:29:32 +0000 (0:00:00.985) 0:11:04.207 ******* 2025-02-10 09:33:02.757612 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757617 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757623 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757629 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757635 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757646 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757652 | orchestrator | 2025-02-10 09:33:02.757657 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.757663 | orchestrator | Monday 10 February 2025 09:29:33 +0000 (0:00:00.737) 0:11:04.945 ******* 2025-02-10 09:33:02.757669 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757675 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757680 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757686 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757692 | orchestrator | skipping: [testbed-node-4] 
2025-02-10 09:33:02.757698 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757704 | orchestrator | 2025-02-10 09:33:02.757709 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.757720 | orchestrator | Monday 10 February 2025 09:29:34 +0000 (0:00:01.102) 0:11:06.048 ******* 2025-02-10 09:33:02.757726 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757732 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757737 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757743 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757749 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757755 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757760 | orchestrator | 2025-02-10 09:33:02.757766 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.757772 | orchestrator | Monday 10 February 2025 09:29:35 +0000 (0:00:00.793) 0:11:06.841 ******* 2025-02-10 09:33:02.757778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.757784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.757790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.757796 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757801 | orchestrator | 2025-02-10 09:33:02.757807 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.757813 | orchestrator | Monday 10 February 2025 09:29:35 +0000 (0:00:00.538) 0:11:07.380 ******* 2025-02-10 09:33:02.757831 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.757840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.757850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.757859 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757869 | orchestrator | 2025-02-10 09:33:02.757876 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:33:02.757882 | orchestrator | Monday 10 February 2025 09:29:36 +0000 (0:00:00.833) 0:11:08.213 ******* 2025-02-10 09:33:02.757888 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.757894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.757900 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.757905 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757911 | orchestrator | 2025-02-10 09:33:02.757917 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.757923 | orchestrator | Monday 10 February 2025 09:29:37 +0000 (0:00:01.050) 0:11:09.263 ******* 2025-02-10 09:33:02.757929 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.757948 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.757954 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.757960 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.757965 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.757971 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.757977 | orchestrator | 2025-02-10 09:33:02.758002 | orchestrator | TASK [ceph-facts : set_fact 
rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.758009 | orchestrator | Monday 10 February 2025 09:29:38 +0000 (0:00:00.737) 0:11:10.001 ******* 2025-02-10 09:33:02.758036 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.758049 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758055 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.758061 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758067 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.758073 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758079 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.758084 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758090 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.758096 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758102 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.758108 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758113 | orchestrator | 2025-02-10 09:33:02.758119 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.758125 | orchestrator | Monday 10 February 2025 09:29:39 +0000 (0:00:01.487) 0:11:11.489 ******* 2025-02-10 09:33:02.758131 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758137 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758143 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758148 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758154 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758160 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758166 | orchestrator | 2025-02-10 09:33:02.758172 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.758177 | orchestrator | Monday 10 February 2025 09:29:40 +0000 (0:00:00.810) 0:11:12.300 ******* 2025-02-10 09:33:02.758183 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758189 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758195 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758204 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758210 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758216 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758221 | orchestrator | 2025-02-10 09:33:02.758230 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.758236 | orchestrator | Monday 10 February 2025 09:29:41 +0000 (0:00:01.012) 0:11:13.313 ******* 2025-02-10 09:33:02.758242 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:33:02.758248 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758254 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:33:02.758260 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:33:02.758265 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758271 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758277 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.758283 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758289 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.758295 | 
orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758307 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.758313 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758319 | orchestrator | 2025-02-10 09:33:02.758325 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.758331 | orchestrator | Monday 10 February 2025 09:29:43 +0000 (0:00:01.599) 0:11:14.912 ******* 2025-02-10 09:33:02.758337 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758343 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758349 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.758355 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758361 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.758373 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758383 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758390 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.758396 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758401 | orchestrator | 2025-02-10 09:33:02.758407 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.758413 | orchestrator | Monday 10 February 2025 09:29:44 +0000 (0:00:01.219) 0:11:16.132 ******* 2025-02-10 09:33:02.758419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:33:02.758425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:33:02.758430 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:33:02.758436 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:33:02.758442 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:33:02.758448 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758454 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:33:02.758459 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:33:02.758465 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:33:02.758471 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:33:02.758486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.758492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.758498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.758504 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758510 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:33:02.758519 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:33:02.758527 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758536 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:33:02.758545 | orchestrator | skipping: [testbed-node-4] 2025-02-10 
09:33:02.758554 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:33:02.758564 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:33:02.758571 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:33:02.758577 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758583 | orchestrator | 2025-02-10 09:33:02.758589 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:33:02.758595 | orchestrator | Monday 10 February 2025 09:29:45 +0000 (0:00:01.683) 0:11:17.816 ******* 2025-02-10 09:33:02.758601 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758607 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758612 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758618 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758624 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758629 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758635 | orchestrator | 2025-02-10 09:33:02.758641 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:33:02.758647 | orchestrator | Monday 10 February 2025 09:29:47 +0000 (0:00:01.592) 0:11:19.409 ******* 2025-02-10 09:33:02.758653 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758658 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758664 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758670 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.758676 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758682 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:33:02.758688 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758694 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:33:02.758706 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758712 | orchestrator | 2025-02-10 09:33:02.758718 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:33:02.758724 | orchestrator | Monday 10 February 2025 09:29:49 +0000 (0:00:01.569) 0:11:20.978 ******* 2025-02-10 09:33:02.758730 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758735 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758741 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758747 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758753 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758758 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758764 | orchestrator | 2025-02-10 09:33:02.758770 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:33:02.758776 | orchestrator | Monday 10 February 2025 09:29:50 +0000 (0:00:01.497) 0:11:22.475 ******* 2025-02-10 09:33:02.758790 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:02.758796 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:02.758802 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:02.758807 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.758819 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.758825 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.758831 | orchestrator | 2025-02-10 09:33:02.758837 | orchestrator | TASK 
[ceph-crash : create client.crash keyring] ******************************** 2025-02-10 09:33:02.758843 | orchestrator | Monday 10 February 2025 09:29:52 +0000 (0:00:01.908) 0:11:24.384 ******* 2025-02-10 09:33:02.758849 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.758854 | orchestrator | 2025-02-10 09:33:02.758860 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-02-10 09:33:02.758866 | orchestrator | Monday 10 February 2025 09:29:56 +0000 (0:00:03.497) 0:11:27.881 ******* 2025-02-10 09:33:02.758872 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.758878 | orchestrator | 2025-02-10 09:33:02.758884 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-02-10 09:33:02.758890 | orchestrator | Monday 10 February 2025 09:29:57 +0000 (0:00:01.853) 0:11:29.735 ******* 2025-02-10 09:33:02.758895 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.758901 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.758907 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.758913 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.758922 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.758928 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.758950 | orchestrator | 2025-02-10 09:33:02.758957 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-02-10 09:33:02.758963 | orchestrator | Monday 10 February 2025 09:30:00 +0000 (0:00:02.379) 0:11:32.115 ******* 2025-02-10 09:33:02.758968 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.758974 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.758987 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.758992 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.758998 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.759004 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.759010 | orchestrator | 2025-02-10 09:33:02.759016 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-02-10 09:33:02.759022 | orchestrator | Monday 10 February 2025 09:30:01 +0000 (0:00:01.580) 0:11:33.695 ******* 2025-02-10 09:33:02.759028 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.759034 | orchestrator | 2025-02-10 09:33:02.759040 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-02-10 09:33:02.759045 | orchestrator | Monday 10 February 2025 09:30:03 +0000 (0:00:02.138) 0:11:35.834 ******* 2025-02-10 09:33:02.759051 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.759063 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.759069 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.759074 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.759080 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.759086 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.759098 | orchestrator | 2025-02-10 09:33:02.759110 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-02-10 09:33:02.759116 | orchestrator | Monday 10 February 2025 09:30:06 +0000 (0:00:02.238) 0:11:38.073 ******* 2025-02-10 09:33:02.759122 | orchestrator | changed: 
[testbed-node-0] 2025-02-10 09:33:02.759127 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.759133 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.759139 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.759145 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.759150 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.759156 | orchestrator | 2025-02-10 09:33:02.759162 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-02-10 09:33:02.759168 | orchestrator | Monday 10 February 2025 09:30:10 +0000 (0:00:04.429) 0:11:42.502 ******* 2025-02-10 09:33:02.759174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.759180 | orchestrator | 2025-02-10 09:33:02.759186 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-02-10 09:33:02.759192 | orchestrator | Monday 10 February 2025 09:30:12 +0000 (0:00:01.488) 0:11:43.991 ******* 2025-02-10 09:33:02.759197 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.759203 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.759209 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.759215 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.759221 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.759249 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.759256 | orchestrator | 2025-02-10 09:33:02.759265 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-02-10 09:33:02.759271 | orchestrator | Monday 10 February 2025 09:30:12 +0000 (0:00:00.697) 0:11:44.689 ******* 2025-02-10 09:33:02.759277 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:02.759283 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.759289 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.759295 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.759300 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:02.759306 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:02.759312 | orchestrator | 2025-02-10 09:33:02.759318 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-02-10 09:33:02.759324 | orchestrator | Monday 10 February 2025 09:30:15 +0000 (0:00:03.035) 0:11:47.725 ******* 2025-02-10 09:33:02.759330 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:02.759336 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:02.759341 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:02.759347 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.759353 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.759359 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.759365 | orchestrator | 2025-02-10 09:33:02.759371 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-02-10 09:33:02.759376 | orchestrator | 2025-02-10 09:33:02.759382 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:33:02.759388 | orchestrator | Monday 10 February 2025 09:30:18 +0000 (0:00:03.004) 0:11:50.729 ******* 2025-02-10 09:33:02.759394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 
09:33:02.759400 | orchestrator | 2025-02-10 09:33:02.759406 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:33:02.759412 | orchestrator | Monday 10 February 2025 09:30:19 +0000 (0:00:00.886) 0:11:51.616 ******* 2025-02-10 09:33:02.759424 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759430 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759436 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759442 | orchestrator | 2025-02-10 09:33:02.759448 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:33:02.759454 | orchestrator | Monday 10 February 2025 09:30:20 +0000 (0:00:00.437) 0:11:52.053 ******* 2025-02-10 09:33:02.759459 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.759465 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.759471 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.759477 | orchestrator | 2025-02-10 09:33:02.759483 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:33:02.759489 | orchestrator | Monday 10 February 2025 09:30:20 +0000 (0:00:00.759) 0:11:52.813 ******* 2025-02-10 09:33:02.759494 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.759500 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.759506 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.759512 | orchestrator | 2025-02-10 09:33:02.759518 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:33:02.759524 | orchestrator | Monday 10 February 2025 09:30:22 +0000 (0:00:01.222) 0:11:54.036 ******* 2025-02-10 09:33:02.759529 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.759535 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.759541 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.759547 | orchestrator | 2025-02-10 09:33:02.759552 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:33:02.759558 | orchestrator | Monday 10 February 2025 09:30:22 +0000 (0:00:00.755) 0:11:54.791 ******* 2025-02-10 09:33:02.759564 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759573 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759579 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759585 | orchestrator | 2025-02-10 09:33:02.759591 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:33:02.759597 | orchestrator | Monday 10 February 2025 09:30:23 +0000 (0:00:00.360) 0:11:55.152 ******* 2025-02-10 09:33:02.759603 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759608 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759614 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759620 | orchestrator | 2025-02-10 09:33:02.759626 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:33:02.759632 | orchestrator | Monday 10 February 2025 09:30:23 +0000 (0:00:00.352) 0:11:55.505 ******* 2025-02-10 09:33:02.759638 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759643 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759649 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759655 | orchestrator | 2025-02-10 09:33:02.759661 | orchestrator | TASK [ceph-handler : check for a 
tcmu-runner container] ************************ 2025-02-10 09:33:02.759671 | orchestrator | Monday 10 February 2025 09:30:24 +0000 (0:00:00.642) 0:11:56.147 ******* 2025-02-10 09:33:02.759677 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759683 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759689 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759694 | orchestrator | 2025-02-10 09:33:02.759700 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:33:02.759706 | orchestrator | Monday 10 February 2025 09:30:24 +0000 (0:00:00.472) 0:11:56.620 ******* 2025-02-10 09:33:02.759712 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759718 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759724 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759729 | orchestrator | 2025-02-10 09:33:02.759735 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:33:02.759741 | orchestrator | Monday 10 February 2025 09:30:25 +0000 (0:00:00.392) 0:11:57.012 ******* 2025-02-10 09:33:02.759754 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759764 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759770 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759776 | orchestrator | 2025-02-10 09:33:02.759781 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:33:02.759787 | orchestrator | Monday 10 February 2025 09:30:25 +0000 (0:00:00.406) 0:11:57.419 ******* 2025-02-10 09:33:02.759793 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.759799 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.759805 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.759810 | orchestrator | 2025-02-10 09:33:02.759816 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:33:02.759822 | orchestrator | Monday 10 February 2025 09:30:26 +0000 (0:00:01.167) 0:11:58.587 ******* 2025-02-10 09:33:02.759828 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759834 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759840 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759845 | orchestrator | 2025-02-10 09:33:02.759851 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:33:02.759861 | orchestrator | Monday 10 February 2025 09:30:27 +0000 (0:00:00.378) 0:11:58.966 ******* 2025-02-10 09:33:02.759867 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.759873 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.759878 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.759884 | orchestrator | 2025-02-10 09:33:02.759890 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:33:02.759896 | orchestrator | Monday 10 February 2025 09:30:27 +0000 (0:00:00.361) 0:11:59.327 ******* 2025-02-10 09:33:02.759902 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.759908 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.759913 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.759919 | orchestrator | 2025-02-10 09:33:02.759925 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:33:02.760033 | orchestrator | 
Monday 10 February 2025 09:30:27 +0000 (0:00:00.412) 0:11:59.739 ******* 2025-02-10 09:33:02.760067 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.760074 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.760079 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.760085 | orchestrator | 2025-02-10 09:33:02.760091 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:33:02.760097 | orchestrator | Monday 10 February 2025 09:30:28 +0000 (0:00:00.881) 0:12:00.620 ******* 2025-02-10 09:33:02.760103 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.760109 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.760114 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.760120 | orchestrator | 2025-02-10 09:33:02.760126 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:33:02.760132 | orchestrator | Monday 10 February 2025 09:30:29 +0000 (0:00:00.471) 0:12:01.092 ******* 2025-02-10 09:33:02.760137 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760143 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760149 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760155 | orchestrator | 2025-02-10 09:33:02.760161 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:33:02.760166 | orchestrator | Monday 10 February 2025 09:30:29 +0000 (0:00:00.390) 0:12:01.482 ******* 2025-02-10 09:33:02.760172 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760178 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760184 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760189 | orchestrator | 2025-02-10 09:33:02.760195 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:33:02.760201 | orchestrator | Monday 10 February 2025 09:30:30 +0000 (0:00:00.423) 0:12:01.906 ******* 2025-02-10 09:33:02.760207 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760213 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760218 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760232 | orchestrator | 2025-02-10 09:33:02.760238 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:33:02.760243 | orchestrator | Monday 10 February 2025 09:30:30 +0000 (0:00:00.718) 0:12:02.624 ******* 2025-02-10 09:33:02.760249 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.760255 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.760262 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.760271 | orchestrator | 2025-02-10 09:33:02.760277 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:33:02.760283 | orchestrator | Monday 10 February 2025 09:30:31 +0000 (0:00:00.429) 0:12:03.053 ******* 2025-02-10 09:33:02.760289 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760294 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760300 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760311 | orchestrator | 2025-02-10 09:33:02.760318 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:33:02.760323 | orchestrator | Monday 10 February 2025 09:30:31 +0000 (0:00:00.364) 0:12:03.417 ******* 2025-02-10 09:33:02.760329 | orchestrator 
| skipping: [testbed-node-3] 2025-02-10 09:33:02.760335 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760347 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760353 | orchestrator | 2025-02-10 09:33:02.760359 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:33:02.760372 | orchestrator | Monday 10 February 2025 09:30:31 +0000 (0:00:00.344) 0:12:03.762 ******* 2025-02-10 09:33:02.760378 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760384 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760390 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760395 | orchestrator | 2025-02-10 09:33:02.760401 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:33:02.760407 | orchestrator | Monday 10 February 2025 09:30:32 +0000 (0:00:00.683) 0:12:04.445 ******* 2025-02-10 09:33:02.760413 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760419 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760424 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760430 | orchestrator | 2025-02-10 09:33:02.760436 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:33:02.760441 | orchestrator | Monday 10 February 2025 09:30:32 +0000 (0:00:00.390) 0:12:04.836 ******* 2025-02-10 09:33:02.760446 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760451 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760456 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760461 | orchestrator | 2025-02-10 09:33:02.760467 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:33:02.760472 | orchestrator | Monday 10 February 2025 09:30:33 +0000 (0:00:00.393) 0:12:05.230 ******* 2025-02-10 09:33:02.760477 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760483 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760488 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760493 | orchestrator | 2025-02-10 09:33:02.760498 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:33:02.760503 | orchestrator | Monday 10 February 2025 09:30:33 +0000 (0:00:00.342) 0:12:05.573 ******* 2025-02-10 09:33:02.760509 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760514 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760519 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760524 | orchestrator | 2025-02-10 09:33:02.760529 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:33:02.760535 | orchestrator | Monday 10 February 2025 09:30:34 +0000 (0:00:00.722) 0:12:06.295 ******* 2025-02-10 09:33:02.760540 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760545 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760551 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760556 | orchestrator | 2025-02-10 09:33:02.760565 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:33:02.760571 | orchestrator | Monday 10 February 2025 09:30:34 +0000 (0:00:00.375) 0:12:06.670 ******* 2025-02-10 09:33:02.760576 | orchestrator | skipping: 
[testbed-node-3] 2025-02-10 09:33:02.760581 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760586 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760591 | orchestrator | 2025-02-10 09:33:02.760596 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:33:02.760605 | orchestrator | Monday 10 February 2025 09:30:35 +0000 (0:00:00.359) 0:12:07.030 ******* 2025-02-10 09:33:02.760610 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760615 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760620 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760625 | orchestrator | 2025-02-10 09:33:02.760631 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:33:02.760636 | orchestrator | Monday 10 February 2025 09:30:35 +0000 (0:00:00.372) 0:12:07.402 ******* 2025-02-10 09:33:02.760641 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760646 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760651 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760657 | orchestrator | 2025-02-10 09:33:02.760662 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:33:02.760667 | orchestrator | Monday 10 February 2025 09:30:36 +0000 (0:00:00.661) 0:12:08.064 ******* 2025-02-10 09:33:02.760672 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760677 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760682 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760688 | orchestrator | 2025-02-10 09:33:02.760693 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:33:02.760698 | orchestrator | Monday 10 February 2025 09:30:36 +0000 (0:00:00.383) 0:12:08.448 ******* 2025-02-10 09:33:02.760703 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.760709 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.760714 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760719 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.760724 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.760729 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760735 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.760740 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.760745 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760750 | orchestrator | 2025-02-10 09:33:02.760755 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:33:02.760760 | orchestrator | Monday 10 February 2025 09:30:37 +0000 (0:00:00.392) 0:12:08.841 ******* 2025-02-10 09:33:02.760766 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:33:02.760771 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:33:02.760776 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760781 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:33:02.760786 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:33:02.760792 | orchestrator | skipping: [testbed-node-4] 2025-02-10 
09:33:02.760797 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:33:02.760802 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:33:02.760807 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760812 | orchestrator | 2025-02-10 09:33:02.760818 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:33:02.760826 | orchestrator | Monday 10 February 2025 09:30:37 +0000 (0:00:00.430) 0:12:09.271 ******* 2025-02-10 09:33:02.760831 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760839 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760845 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760850 | orchestrator | 2025-02-10 09:33:02.760855 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:33:02.760860 | orchestrator | Monday 10 February 2025 09:30:38 +0000 (0:00:00.700) 0:12:09.972 ******* 2025-02-10 09:33:02.760865 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760870 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760876 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760881 | orchestrator | 2025-02-10 09:33:02.760886 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.760895 | orchestrator | Monday 10 February 2025 09:30:38 +0000 (0:00:00.409) 0:12:10.381 ******* 2025-02-10 09:33:02.760900 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760905 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760911 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760916 | orchestrator | 2025-02-10 09:33:02.760921 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.760926 | orchestrator | Monday 10 February 2025 09:30:38 +0000 (0:00:00.388) 0:12:10.769 ******* 2025-02-10 09:33:02.760950 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760959 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760964 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.760970 | orchestrator | 2025-02-10 09:33:02.760975 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.760980 | orchestrator | Monday 10 February 2025 09:30:39 +0000 (0:00:00.378) 0:12:11.148 ******* 2025-02-10 09:33:02.760985 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.760993 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.760999 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761004 | orchestrator | 2025-02-10 09:33:02.761009 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.761014 | orchestrator | Monday 10 February 2025 09:30:40 +0000 (0:00:00.696) 0:12:11.845 ******* 2025-02-10 09:33:02.761019 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761025 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761030 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761035 | orchestrator | 2025-02-10 09:33:02.761040 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.761045 | orchestrator | Monday 10 February 2025 
09:30:40 +0000 (0:00:00.388) 0:12:12.234 ******* 2025-02-10 09:33:02.761050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.761056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.761061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.761066 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761072 | orchestrator | 2025-02-10 09:33:02.761077 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.761082 | orchestrator | Monday 10 February 2025 09:30:40 +0000 (0:00:00.469) 0:12:12.704 ******* 2025-02-10 09:33:02.761088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.761093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.761098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.761103 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761108 | orchestrator | 2025-02-10 09:33:02.761114 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:33:02.761119 | orchestrator | Monday 10 February 2025 09:30:41 +0000 (0:00:00.484) 0:12:13.189 ******* 2025-02-10 09:33:02.761124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.761129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.761139 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.761144 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761149 | orchestrator | 2025-02-10 09:33:02.761155 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.761160 | orchestrator | Monday 10 February 2025 09:30:41 +0000 (0:00:00.479) 0:12:13.668 ******* 2025-02-10 09:33:02.761165 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761170 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761175 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761181 | orchestrator | 2025-02-10 09:33:02.761186 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.761191 | orchestrator | Monday 10 February 2025 09:30:42 +0000 (0:00:00.376) 0:12:14.044 ******* 2025-02-10 09:33:02.761196 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.761201 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761206 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.761212 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761217 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.761222 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761227 | orchestrator | 2025-02-10 09:33:02.761232 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.761238 | orchestrator | Monday 10 February 2025 09:30:43 +0000 (0:00:00.936) 0:12:14.981 ******* 2025-02-10 09:33:02.761243 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761248 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761253 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761258 | orchestrator | 2025-02-10 09:33:02.761263 | orchestrator | TASK 
[ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.761271 | orchestrator | Monday 10 February 2025 09:30:43 +0000 (0:00:00.374) 0:12:15.356 ******* 2025-02-10 09:33:02.761276 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761282 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761287 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761292 | orchestrator | 2025-02-10 09:33:02.761300 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.761306 | orchestrator | Monday 10 February 2025 09:30:43 +0000 (0:00:00.368) 0:12:15.725 ******* 2025-02-10 09:33:02.761311 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.761316 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761322 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.761327 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761332 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.761337 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761342 | orchestrator | 2025-02-10 09:33:02.761348 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.761353 | orchestrator | Monday 10 February 2025 09:30:44 +0000 (0:00:00.584) 0:12:16.309 ******* 2025-02-10 09:33:02.761358 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.761363 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761369 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.761374 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761379 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.761384 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761390 | orchestrator | 2025-02-10 09:33:02.761395 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.761400 | orchestrator | Monday 10 February 2025 09:30:45 +0000 (0:00:00.773) 0:12:17.082 ******* 2025-02-10 09:33:02.761409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.761414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.761419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.761424 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761430 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:33:02.761435 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:33:02.761440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:33:02.761445 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761451 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:33:02.761456 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:33:02.761461 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:33:02.761466 | orchestrator | skipping: [testbed-node-5] 
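Note on the ceph-crash tasks earlier in this play: the "create client.crash keyring" step is, in effect, a single ceph auth call followed by distributing the key to the nodes. A minimal manual equivalent is sketched below; the capability strings are assumed from the usual crash-profile setup and are not read from this log.

    # Create the keyring that the ceph-crash containers mount
    # (assumed caps: the standard crash profile on mon and mgr).
    ceph auth get-or-create client.crash \
        mon 'allow profile crash' \
        mgr 'allow profile crash' \
        -o /etc/ceph/ceph.client.crash.keyring

    # Once the ceph-crash services are running, crash reports can be listed with:
    ceph crash ls
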
2025-02-10 09:33:02.761471 | orchestrator | 2025-02-10 09:33:02.761476 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:33:02.761482 | orchestrator | Monday 10 February 2025 09:30:45 +0000 (0:00:00.722) 0:12:17.804 ******* 2025-02-10 09:33:02.761487 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761492 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761497 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761503 | orchestrator | 2025-02-10 09:33:02.761508 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:33:02.761513 | orchestrator | Monday 10 February 2025 09:30:46 +0000 (0:00:00.865) 0:12:18.670 ******* 2025-02-10 09:33:02.761518 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.761523 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761528 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:33:02.761534 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761539 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:33:02.761544 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761549 | orchestrator | 2025-02-10 09:33:02.761555 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:33:02.761560 | orchestrator | Monday 10 February 2025 09:30:47 +0000 (0:00:00.705) 0:12:19.376 ******* 2025-02-10 09:33:02.761565 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761570 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761575 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761580 | orchestrator | 2025-02-10 09:33:02.761586 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:33:02.761591 | orchestrator | Monday 10 February 2025 09:30:48 +0000 (0:00:00.871) 0:12:20.247 ******* 2025-02-10 09:33:02.761596 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761601 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761606 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761611 | orchestrator | 2025-02-10 09:33:02.761617 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-02-10 09:33:02.761622 | orchestrator | Monday 10 February 2025 09:30:49 +0000 (0:00:00.682) 0:12:20.929 ******* 2025-02-10 09:33:02.761627 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761632 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761638 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-02-10 09:33:02.761643 | orchestrator | 2025-02-10 09:33:02.761648 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-02-10 09:33:02.761653 | orchestrator | Monday 10 February 2025 09:30:49 +0000 (0:00:00.720) 0:12:21.650 ******* 2025-02-10 09:33:02.761658 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:33:02.761664 | orchestrator | 2025-02-10 09:33:02.761669 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-02-10 09:33:02.761678 | orchestrator | Monday 10 February 2025 09:30:51 +0000 (0:00:01.762) 0:12:23.413 ******* 2025-02-10 09:33:02.761687 | orchestrator | skipping: 
[testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-02-10 09:33:02.761695 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761701 | orchestrator | 2025-02-10 09:33:02.761706 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-02-10 09:33:02.761711 | orchestrator | Monday 10 February 2025 09:30:51 +0000 (0:00:00.406) 0:12:23.819 ******* 2025-02-10 09:33:02.761718 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:33:02.761725 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:33:02.761731 | orchestrator | 2025-02-10 09:33:02.761736 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-02-10 09:33:02.761741 | orchestrator | Monday 10 February 2025 09:30:58 +0000 (0:00:06.343) 0:12:30.163 ******* 2025-02-10 09:33:02.761746 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:33:02.761751 | orchestrator | 2025-02-10 09:33:02.761757 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-02-10 09:33:02.761762 | orchestrator | Monday 10 February 2025 09:31:01 +0000 (0:00:02.899) 0:12:33.063 ******* 2025-02-10 09:33:02.761767 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.761772 | orchestrator | 2025-02-10 09:33:02.761777 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-02-10 09:33:02.761782 | orchestrator | Monday 10 February 2025 09:31:02 +0000 (0:00:00.927) 0:12:33.990 ******* 2025-02-10 09:33:02.761788 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-10 09:33:02.761797 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-10 09:33:02.761803 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-10 09:33:02.761808 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-02-10 09:33:02.761813 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-02-10 09:33:02.761819 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-02-10 09:33:02.761824 | orchestrator | 2025-02-10 09:33:02.761832 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-02-10 09:33:02.761837 | orchestrator | Monday 10 February 2025 09:31:03 +0000 (0:00:01.148) 0:12:35.139 ******* 2025-02-10 09:33:02.761843 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:02.761848 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.761853 | orchestrator | ok: [testbed-node-3 -> {{ 
groups.get(mon_group_name)[0] }}] 2025-02-10 09:33:02.761858 | orchestrator | 2025-02-10 09:33:02.761863 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-02-10 09:33:02.761869 | orchestrator | Monday 10 February 2025 09:31:05 +0000 (0:00:01.787) 0:12:36.927 ******* 2025-02-10 09:33:02.761874 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:33:02.761879 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.761884 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.761889 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:33:02.761894 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:33:02.761904 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.761909 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:33:02.761914 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:33:02.761919 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.761924 | orchestrator | 2025-02-10 09:33:02.761930 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-02-10 09:33:02.761949 | orchestrator | Monday 10 February 2025 09:31:06 +0000 (0:00:01.311) 0:12:38.238 ******* 2025-02-10 09:33:02.761955 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.761960 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.761965 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.761970 | orchestrator | 2025-02-10 09:33:02.761975 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-02-10 09:33:02.761981 | orchestrator | Monday 10 February 2025 09:31:07 +0000 (0:00:00.675) 0:12:38.914 ******* 2025-02-10 09:33:02.761986 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.761991 | orchestrator | 2025-02-10 09:33:02.761996 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-02-10 09:33:02.762002 | orchestrator | Monday 10 February 2025 09:31:07 +0000 (0:00:00.685) 0:12:39.599 ******* 2025-02-10 09:33:02.762007 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.762033 | orchestrator | 2025-02-10 09:33:02.762039 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-02-10 09:33:02.762045 | orchestrator | Monday 10 February 2025 09:31:08 +0000 (0:00:00.955) 0:12:40.554 ******* 2025-02-10 09:33:02.762050 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.762058 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.762064 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.762069 | orchestrator | 2025-02-10 09:33:02.762077 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-02-10 09:33:02.762083 | orchestrator | Monday 10 February 2025 09:31:10 +0000 (0:00:01.505) 0:12:42.060 ******* 2025-02-10 09:33:02.762088 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.762093 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.762098 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.762103 | orchestrator | 2025-02-10 09:33:02.762109 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] 
*************************************** 2025-02-10 09:33:02.762114 | orchestrator | Monday 10 February 2025 09:31:11 +0000 (0:00:01.548) 0:12:43.608 ******* 2025-02-10 09:33:02.762119 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.762124 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.762130 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.762135 | orchestrator | 2025-02-10 09:33:02.762140 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-02-10 09:33:02.762145 | orchestrator | Monday 10 February 2025 09:31:13 +0000 (0:00:02.009) 0:12:45.617 ******* 2025-02-10 09:33:02.762151 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.762156 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.762161 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.762166 | orchestrator | 2025-02-10 09:33:02.762172 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-02-10 09:33:02.762177 | orchestrator | Monday 10 February 2025 09:31:15 +0000 (0:00:02.164) 0:12:47.782 ******* 2025-02-10 09:33:02.762182 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-02-10 09:33:02.762187 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-02-10 09:33:02.762193 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-02-10 09:33:02.762198 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762203 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762213 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762218 | orchestrator | 2025-02-10 09:33:02.762223 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:33:02.762228 | orchestrator | Monday 10 February 2025 09:31:33 +0000 (0:00:17.150) 0:13:04.933 ******* 2025-02-10 09:33:02.762234 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.762239 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.762244 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.762249 | orchestrator | 2025-02-10 09:33:02.762254 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-10 09:33:02.762260 | orchestrator | Monday 10 February 2025 09:31:33 +0000 (0:00:00.750) 0:13:05.683 ******* 2025-02-10 09:33:02.762265 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.762270 | orchestrator | 2025-02-10 09:33:02.762275 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-02-10 09:33:02.762280 | orchestrator | Monday 10 February 2025 09:31:34 +0000 (0:00:00.852) 0:13:06.536 ******* 2025-02-10 09:33:02.762286 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762291 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762296 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762301 | orchestrator | 2025-02-10 09:33:02.762306 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-02-10 09:33:02.762312 | orchestrator | Monday 10 February 2025 09:31:35 +0000 (0:00:00.393) 0:13:06.930 ******* 2025-02-10 09:33:02.762317 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.762322 | orchestrator | 
changed: [testbed-node-4] 2025-02-10 09:33:02.762327 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.762332 | orchestrator | 2025-02-10 09:33:02.762341 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-02-10 09:33:02.762346 | orchestrator | Monday 10 February 2025 09:31:36 +0000 (0:00:01.338) 0:13:08.269 ******* 2025-02-10 09:33:02.762351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.762357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.762362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.762367 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762372 | orchestrator | 2025-02-10 09:33:02.762378 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-02-10 09:33:02.762383 | orchestrator | Monday 10 February 2025 09:31:37 +0000 (0:00:01.248) 0:13:09.517 ******* 2025-02-10 09:33:02.762388 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762393 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762398 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762404 | orchestrator | 2025-02-10 09:33:02.762409 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:33:02.762414 | orchestrator | Monday 10 February 2025 09:31:38 +0000 (0:00:00.398) 0:13:09.916 ******* 2025-02-10 09:33:02.762419 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.762424 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.762430 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.762435 | orchestrator | 2025-02-10 09:33:02.762440 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-02-10 09:33:02.762445 | orchestrator | 2025-02-10 09:33:02.762450 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:33:02.762455 | orchestrator | Monday 10 February 2025 09:31:40 +0000 (0:00:02.176) 0:13:12.093 ******* 2025-02-10 09:33:02.762461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.762466 | orchestrator | 2025-02-10 09:33:02.762471 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:33:02.762476 | orchestrator | Monday 10 February 2025 09:31:41 +0000 (0:00:00.809) 0:13:12.903 ******* 2025-02-10 09:33:02.762485 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762491 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762496 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762501 | orchestrator | 2025-02-10 09:33:02.762509 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:33:02.762514 | orchestrator | Monday 10 February 2025 09:31:41 +0000 (0:00:00.364) 0:13:13.267 ******* 2025-02-10 09:33:02.762519 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762525 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762530 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762535 | orchestrator | 2025-02-10 09:33:02.762540 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:33:02.762545 | orchestrator | Monday 10 
February 2025 09:31:42 +0000 (0:00:00.758) 0:13:14.026 ******* 2025-02-10 09:33:02.762551 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762556 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762561 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762566 | orchestrator | 2025-02-10 09:33:02.762571 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:33:02.762577 | orchestrator | Monday 10 February 2025 09:31:43 +0000 (0:00:00.958) 0:13:14.985 ******* 2025-02-10 09:33:02.762582 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762587 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762592 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762597 | orchestrator | 2025-02-10 09:33:02.762602 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:33:02.762608 | orchestrator | Monday 10 February 2025 09:31:43 +0000 (0:00:00.760) 0:13:15.745 ******* 2025-02-10 09:33:02.762613 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762618 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762623 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762628 | orchestrator | 2025-02-10 09:33:02.762633 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:33:02.762639 | orchestrator | Monday 10 February 2025 09:31:44 +0000 (0:00:00.349) 0:13:16.094 ******* 2025-02-10 09:33:02.762644 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762649 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762654 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762659 | orchestrator | 2025-02-10 09:33:02.762665 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:33:02.762670 | orchestrator | Monday 10 February 2025 09:31:44 +0000 (0:00:00.332) 0:13:16.427 ******* 2025-02-10 09:33:02.762675 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762680 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762686 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762694 | orchestrator | 2025-02-10 09:33:02.762700 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:33:02.762705 | orchestrator | Monday 10 February 2025 09:31:45 +0000 (0:00:00.698) 0:13:17.125 ******* 2025-02-10 09:33:02.762710 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762716 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762725 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762731 | orchestrator | 2025-02-10 09:33:02.762736 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:33:02.762741 | orchestrator | Monday 10 February 2025 09:31:45 +0000 (0:00:00.375) 0:13:17.500 ******* 2025-02-10 09:33:02.762747 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762752 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762757 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762767 | orchestrator | 2025-02-10 09:33:02.762772 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:33:02.762778 | orchestrator | Monday 10 February 2025 09:31:46 +0000 (0:00:00.383) 0:13:17.884 ******* 2025-02-10 09:33:02.762785 | 
orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762796 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762802 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762807 | orchestrator | 2025-02-10 09:33:02.762812 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:33:02.762817 | orchestrator | Monday 10 February 2025 09:31:46 +0000 (0:00:00.383) 0:13:18.268 ******* 2025-02-10 09:33:02.762822 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762828 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762833 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762838 | orchestrator | 2025-02-10 09:33:02.762846 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:33:02.762852 | orchestrator | Monday 10 February 2025 09:31:47 +0000 (0:00:01.118) 0:13:19.386 ******* 2025-02-10 09:33:02.762857 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762862 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762867 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762872 | orchestrator | 2025-02-10 09:33:02.762877 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:33:02.762883 | orchestrator | Monday 10 February 2025 09:31:47 +0000 (0:00:00.409) 0:13:19.796 ******* 2025-02-10 09:33:02.762888 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.762893 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.762898 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.762903 | orchestrator | 2025-02-10 09:33:02.762909 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:33:02.762914 | orchestrator | Monday 10 February 2025 09:31:48 +0000 (0:00:00.345) 0:13:20.142 ******* 2025-02-10 09:33:02.762919 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762924 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762929 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762952 | orchestrator | 2025-02-10 09:33:02.762958 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:33:02.762963 | orchestrator | Monday 10 February 2025 09:31:48 +0000 (0:00:00.367) 0:13:20.509 ******* 2025-02-10 09:33:02.762968 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.762974 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.762979 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.762984 | orchestrator | 2025-02-10 09:33:02.762989 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:33:02.762994 | orchestrator | Monday 10 February 2025 09:31:49 +0000 (0:00:00.699) 0:13:21.209 ******* 2025-02-10 09:33:02.763000 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.763005 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.763010 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.763015 | orchestrator | 2025-02-10 09:33:02.763020 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:33:02.763028 | orchestrator | Monday 10 February 2025 09:31:49 +0000 (0:00:00.377) 0:13:21.586 ******* 2025-02-10 09:33:02.763034 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763039 | orchestrator | skipping: [testbed-node-4] 2025-02-10 
09:33:02.763044 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763050 | orchestrator | 2025-02-10 09:33:02.763055 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:33:02.763060 | orchestrator | Monday 10 February 2025 09:31:50 +0000 (0:00:00.335) 0:13:21.922 ******* 2025-02-10 09:33:02.763065 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763071 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763076 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763081 | orchestrator | 2025-02-10 09:33:02.763086 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:33:02.763091 | orchestrator | Monday 10 February 2025 09:31:50 +0000 (0:00:00.349) 0:13:22.271 ******* 2025-02-10 09:33:02.763096 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763102 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763107 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763125 | orchestrator | 2025-02-10 09:33:02.763131 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:33:02.763136 | orchestrator | Monday 10 February 2025 09:31:51 +0000 (0:00:00.659) 0:13:22.931 ******* 2025-02-10 09:33:02.763141 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.763146 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.763151 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.763157 | orchestrator | 2025-02-10 09:33:02.763162 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:33:02.763167 | orchestrator | Monday 10 February 2025 09:31:51 +0000 (0:00:00.389) 0:13:23.321 ******* 2025-02-10 09:33:02.763172 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763178 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763183 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763188 | orchestrator | 2025-02-10 09:33:02.763193 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:33:02.763198 | orchestrator | Monday 10 February 2025 09:31:51 +0000 (0:00:00.369) 0:13:23.690 ******* 2025-02-10 09:33:02.763203 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763209 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763214 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763219 | orchestrator | 2025-02-10 09:33:02.763224 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:33:02.763229 | orchestrator | Monday 10 February 2025 09:31:52 +0000 (0:00:00.380) 0:13:24.070 ******* 2025-02-10 09:33:02.763235 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763240 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763245 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763250 | orchestrator | 2025-02-10 09:33:02.763255 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:33:02.763261 | orchestrator | Monday 10 February 2025 09:31:52 +0000 (0:00:00.655) 0:13:24.726 ******* 2025-02-10 09:33:02.763266 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763271 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763276 | orchestrator | skipping: [testbed-node-5] 2025-02-10 
09:33:02.763281 | orchestrator | 2025-02-10 09:33:02.763287 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:33:02.763292 | orchestrator | Monday 10 February 2025 09:31:53 +0000 (0:00:00.375) 0:13:25.102 ******* 2025-02-10 09:33:02.763297 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763305 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763311 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763316 | orchestrator | 2025-02-10 09:33:02.763321 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:33:02.763326 | orchestrator | Monday 10 February 2025 09:31:53 +0000 (0:00:00.367) 0:13:25.469 ******* 2025-02-10 09:33:02.763332 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763337 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763342 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763347 | orchestrator | 2025-02-10 09:33:02.763352 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:33:02.763358 | orchestrator | Monday 10 February 2025 09:31:53 +0000 (0:00:00.340) 0:13:25.809 ******* 2025-02-10 09:33:02.763363 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763368 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763373 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763378 | orchestrator | 2025-02-10 09:33:02.763383 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:33:02.763392 | orchestrator | Monday 10 February 2025 09:31:54 +0000 (0:00:00.699) 0:13:26.509 ******* 2025-02-10 09:33:02.763397 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763403 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763408 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763413 | orchestrator | 2025-02-10 09:33:02.763423 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:33:02.763428 | orchestrator | Monday 10 February 2025 09:31:55 +0000 (0:00:00.379) 0:13:26.889 ******* 2025-02-10 09:33:02.763433 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763439 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763444 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763449 | orchestrator | 2025-02-10 09:33:02.763454 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:33:02.763460 | orchestrator | Monday 10 February 2025 09:31:55 +0000 (0:00:00.417) 0:13:27.307 ******* 2025-02-10 09:33:02.763465 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763470 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763475 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763480 | orchestrator | 2025-02-10 09:33:02.763486 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:33:02.763491 | orchestrator | Monday 10 February 2025 09:31:55 +0000 (0:00:00.387) 0:13:27.694 ******* 2025-02-10 09:33:02.763496 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763501 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763506 | orchestrator | 
skipping: [testbed-node-5] 2025-02-10 09:33:02.763512 | orchestrator | 2025-02-10 09:33:02.763520 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:33:02.763526 | orchestrator | Monday 10 February 2025 09:31:56 +0000 (0:00:00.655) 0:13:28.350 ******* 2025-02-10 09:33:02.763531 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763536 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763542 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763547 | orchestrator | 2025-02-10 09:33:02.763552 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:33:02.763557 | orchestrator | Monday 10 February 2025 09:31:56 +0000 (0:00:00.396) 0:13:28.746 ******* 2025-02-10 09:33:02.763563 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.763568 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:33:02.763573 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763578 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.763583 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:33:02.763589 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763594 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.763601 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:33:02.763607 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763612 | orchestrator | 2025-02-10 09:33:02.763617 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:33:02.763622 | orchestrator | Monday 10 February 2025 09:31:57 +0000 (0:00:00.436) 0:13:29.183 ******* 2025-02-10 09:33:02.763628 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:33:02.763633 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:33:02.763638 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763643 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:33:02.763648 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:33:02.763654 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763659 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:33:02.763664 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:33:02.763669 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763674 | orchestrator | 2025-02-10 09:33:02.763680 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:33:02.763685 | orchestrator | Monday 10 February 2025 09:31:57 +0000 (0:00:00.480) 0:13:29.664 ******* 2025-02-10 09:33:02.763690 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763700 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763705 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763711 | orchestrator | 2025-02-10 09:33:02.763716 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:33:02.763721 | orchestrator | Monday 10 February 2025 09:31:58 +0000 (0:00:00.691) 0:13:30.355 ******* 2025-02-10 09:33:02.763726 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763731 | orchestrator | skipping: 
[testbed-node-4] 2025-02-10 09:33:02.763737 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763742 | orchestrator | 2025-02-10 09:33:02.763747 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:02.763752 | orchestrator | Monday 10 February 2025 09:31:58 +0000 (0:00:00.363) 0:13:30.719 ******* 2025-02-10 09:33:02.763757 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763763 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763771 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763776 | orchestrator | 2025-02-10 09:33:02.763781 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:02.763786 | orchestrator | Monday 10 February 2025 09:31:59 +0000 (0:00:00.397) 0:13:31.116 ******* 2025-02-10 09:33:02.763791 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763796 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763802 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763807 | orchestrator | 2025-02-10 09:33:02.763812 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:02.763817 | orchestrator | Monday 10 February 2025 09:31:59 +0000 (0:00:00.377) 0:13:31.494 ******* 2025-02-10 09:33:02.763822 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763828 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763833 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763838 | orchestrator | 2025-02-10 09:33:02.763843 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:02.763849 | orchestrator | Monday 10 February 2025 09:32:00 +0000 (0:00:00.694) 0:13:32.188 ******* 2025-02-10 09:33:02.763854 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763859 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.763864 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.763871 | orchestrator | 2025-02-10 09:33:02.763880 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:02.763885 | orchestrator | Monday 10 February 2025 09:32:00 +0000 (0:00:00.483) 0:13:32.671 ******* 2025-02-10 09:33:02.763922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.763968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.763975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.763980 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.763985 | orchestrator | 2025-02-10 09:33:02.763991 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:02.763996 | orchestrator | Monday 10 February 2025 09:32:01 +0000 (0:00:00.517) 0:13:33.189 ******* 2025-02-10 09:33:02.764001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.764006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.764012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.764017 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764022 | orchestrator | 2025-02-10 09:33:02.764031 | orchestrator | TASK [ceph-facts : set_fact 
_radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:33:02.764037 | orchestrator | Monday 10 February 2025 09:32:01 +0000 (0:00:00.494) 0:13:33.684 ******* 2025-02-10 09:33:02.764042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.764047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.764057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.764062 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764071 | orchestrator | 2025-02-10 09:33:02.764077 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.764082 | orchestrator | Monday 10 February 2025 09:32:02 +0000 (0:00:00.481) 0:13:34.165 ******* 2025-02-10 09:33:02.764087 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764092 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764098 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764103 | orchestrator | 2025-02-10 09:33:02.764108 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:02.764113 | orchestrator | Monday 10 February 2025 09:32:02 +0000 (0:00:00.358) 0:13:34.524 ******* 2025-02-10 09:33:02.764119 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.764124 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764129 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.764134 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764140 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.764145 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764150 | orchestrator | 2025-02-10 09:33:02.764155 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:02.764163 | orchestrator | Monday 10 February 2025 09:32:03 +0000 (0:00:00.890) 0:13:35.415 ******* 2025-02-10 09:33:02.764169 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764174 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764179 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764184 | orchestrator | 2025-02-10 09:33:02.764190 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:02.764195 | orchestrator | Monday 10 February 2025 09:32:03 +0000 (0:00:00.407) 0:13:35.822 ******* 2025-02-10 09:33:02.764200 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764205 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764211 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764216 | orchestrator | 2025-02-10 09:33:02.764221 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:02.764226 | orchestrator | Monday 10 February 2025 09:32:04 +0000 (0:00:00.376) 0:13:36.198 ******* 2025-02-10 09:33:02.764232 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:02.764237 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764242 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:02.764248 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764253 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:02.764258 | orchestrator | skipping: [testbed-node-5] 2025-02-10 
09:33:02.764263 | orchestrator | 2025-02-10 09:33:02.764269 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:02.764274 | orchestrator | Monday 10 February 2025 09:32:04 +0000 (0:00:00.484) 0:13:36.683 ******* 2025-02-10 09:33:02.764279 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.764285 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764290 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.764296 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764301 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:02.764306 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764312 | orchestrator | 2025-02-10 09:33:02.764317 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:02.764322 | orchestrator | Monday 10 February 2025 09:32:05 +0000 (0:00:00.686) 0:13:37.370 ******* 2025-02-10 09:33:02.764331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.764336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.764342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.764347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:33:02.764352 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:33:02.764357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:33:02.764363 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764368 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:33:02.764378 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:33:02.764383 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:33:02.764389 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764394 | orchestrator | 2025-02-10 09:33:02.764399 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:33:02.764404 | orchestrator | Monday 10 February 2025 09:32:06 +0000 (0:00:00.700) 0:13:38.070 ******* 2025-02-10 09:33:02.764410 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764415 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764420 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764425 | orchestrator | 2025-02-10 09:33:02.764431 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:33:02.764436 | orchestrator | Monday 10 February 2025 09:32:07 +0000 (0:00:00.875) 0:13:38.946 ******* 2025-02-10 09:33:02.764441 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.764449 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764455 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:33:02.764460 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764465 | orchestrator | skipping: 
[testbed-node-5] => (item=None)  2025-02-10 09:33:02.764470 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764476 | orchestrator | 2025-02-10 09:33:02.764481 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:33:02.764486 | orchestrator | Monday 10 February 2025 09:32:07 +0000 (0:00:00.672) 0:13:39.618 ******* 2025-02-10 09:33:02.764491 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764496 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764502 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764507 | orchestrator | 2025-02-10 09:33:02.764512 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:33:02.764517 | orchestrator | Monday 10 February 2025 09:32:08 +0000 (0:00:00.874) 0:13:40.493 ******* 2025-02-10 09:33:02.764523 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764528 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764533 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764538 | orchestrator | 2025-02-10 09:33:02.764543 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-02-10 09:33:02.764549 | orchestrator | Monday 10 February 2025 09:32:09 +0000 (0:00:00.601) 0:13:41.094 ******* 2025-02-10 09:33:02.764554 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.764559 | orchestrator | 2025-02-10 09:33:02.764565 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-02-10 09:33:02.764570 | orchestrator | Monday 10 February 2025 09:32:10 +0000 (0:00:00.886) 0:13:41.980 ******* 2025-02-10 09:33:02.764575 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-02-10 09:33:02.764580 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-02-10 09:33:02.764586 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-02-10 09:33:02.764591 | orchestrator | 2025-02-10 09:33:02.764596 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-02-10 09:33:02.764604 | orchestrator | Monday 10 February 2025 09:32:10 +0000 (0:00:00.740) 0:13:42.721 ******* 2025-02-10 09:33:02.764610 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:02.764615 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.764620 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-10 09:33:02.764625 | orchestrator | 2025-02-10 09:33:02.764631 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-02-10 09:33:02.764636 | orchestrator | Monday 10 February 2025 09:32:12 +0000 (0:00:01.675) 0:13:44.397 ******* 2025-02-10 09:33:02.764641 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:33:02.764646 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:33:02.764652 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.764657 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:33:02.764662 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:33:02.764667 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.764672 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 
09:33:02.764678 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:33:02.764683 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.764688 | orchestrator | 2025-02-10 09:33:02.764693 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-02-10 09:33:02.764699 | orchestrator | Monday 10 February 2025 09:32:13 +0000 (0:00:01.238) 0:13:45.635 ******* 2025-02-10 09:33:02.764704 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764709 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764714 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764720 | orchestrator | 2025-02-10 09:33:02.764725 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-02-10 09:33:02.764730 | orchestrator | Monday 10 February 2025 09:32:14 +0000 (0:00:00.575) 0:13:46.210 ******* 2025-02-10 09:33:02.764735 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764741 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.764750 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.764756 | orchestrator | 2025-02-10 09:33:02.764761 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-02-10 09:33:02.764766 | orchestrator | Monday 10 February 2025 09:32:14 +0000 (0:00:00.360) 0:13:46.571 ******* 2025-02-10 09:33:02.764775 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-02-10 09:33:02.764780 | orchestrator | 2025-02-10 09:33:02.764786 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-02-10 09:33:02.764791 | orchestrator | Monday 10 February 2025 09:32:14 +0000 (0:00:00.260) 0:13:46.831 ******* 2025-02-10 09:33:02.764796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764823 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764828 | orchestrator | 2025-02-10 09:33:02.764836 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-02-10 09:33:02.764842 | orchestrator | Monday 10 February 2025 09:32:16 +0000 (0:00:01.037) 0:13:47.869 ******* 2025-02-10 09:33:02.764847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-02-10 09:33:02.764870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764881 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764886 | orchestrator | 2025-02-10 09:33:02.764891 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-02-10 09:33:02.764896 | orchestrator | Monday 10 February 2025 09:32:16 +0000 (0:00:00.910) 0:13:48.779 ******* 2025-02-10 09:33:02.764902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:33:02.764928 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.764948 | orchestrator | 2025-02-10 09:33:02.764958 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-02-10 09:33:02.764967 | orchestrator | Monday 10 February 2025 09:32:17 +0000 (0:00:00.731) 0:13:49.511 ******* 2025-02-10 09:33:02.764975 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:33:02.764985 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:33:02.764991 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:33:02.764997 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:33:02.765003 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:33:02.765008 | orchestrator | 2025-02-10 09:33:02.765013 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-02-10 09:33:02.765019 | orchestrator | Monday 10 February 2025 09:32:41 +0000 (0:00:24.233) 0:14:13.744 ******* 2025-02-10 09:33:02.765024 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.765029 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.765034 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.765040 | orchestrator | 2025-02-10 09:33:02.765045 | orchestrator | TASK [ceph-rgw : include_tasks 
start_radosgw.yml] ****************************** 2025-02-10 09:33:02.765050 | orchestrator | Monday 10 February 2025 09:32:42 +0000 (0:00:00.533) 0:14:14.278 ******* 2025-02-10 09:33:02.765055 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.765060 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.765069 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.765079 | orchestrator | 2025-02-10 09:33:02.765084 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-02-10 09:33:02.765089 | orchestrator | Monday 10 February 2025 09:32:42 +0000 (0:00:00.366) 0:14:14.644 ******* 2025-02-10 09:33:02.765095 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.765100 | orchestrator | 2025-02-10 09:33:02.765105 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-02-10 09:33:02.765111 | orchestrator | Monday 10 February 2025 09:32:43 +0000 (0:00:00.659) 0:14:15.304 ******* 2025-02-10 09:33:02.765116 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.765121 | orchestrator | 2025-02-10 09:33:02.765127 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-02-10 09:33:02.765132 | orchestrator | Monday 10 February 2025 09:32:44 +0000 (0:00:00.922) 0:14:16.226 ******* 2025-02-10 09:33:02.765137 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.765142 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.765150 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.765156 | orchestrator | 2025-02-10 09:33:02.765161 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-02-10 09:33:02.765166 | orchestrator | Monday 10 February 2025 09:32:45 +0000 (0:00:01.335) 0:14:17.562 ******* 2025-02-10 09:33:02.765171 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.765177 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.765182 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.765187 | orchestrator | 2025-02-10 09:33:02.765192 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-02-10 09:33:02.765197 | orchestrator | Monday 10 February 2025 09:32:46 +0000 (0:00:01.189) 0:14:18.752 ******* 2025-02-10 09:33:02.765203 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.765208 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.765213 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.765218 | orchestrator | 2025-02-10 09:33:02.765223 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-02-10 09:33:02.765228 | orchestrator | Monday 10 February 2025 09:32:49 +0000 (0:00:02.315) 0:14:21.068 ******* 2025-02-10 09:33:02.765234 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-10 09:33:02.765239 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-10 09:33:02.765244 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
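The four "ceph-rgw : generate systemd unit file / generate systemd ceph-radosgw target file / enable ceph-radosgw.target / systemd start rgw container" tasks above template a per-host unit, enable the ceph-radosgw.target, and then start one containerized radosgw per entry in rgw_instances (rgw0 on 192.168.16.13/14/15, frontend port 8081). Below is a minimal Ansible sketch of that pattern: ceph-radosgw.target and the rgw_instances items are taken from the log, while the per-instance service name is an assumed naming scheme used only for illustration, not the role's exact code.

---
- name: Enable the radosgw target so instances come up on boot
  ansible.builtin.systemd:
    name: ceph-radosgw.target
    enabled: true
    daemon_reload: true

- name: Start one containerized radosgw per configured instance
  ansible.builtin.systemd:
    # Assumed unit naming scheme; the real unit file is templated by the ceph-rgw role above.
    name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    state: started
    enabled: true
  loop: "{{ rgw_instances }}"
  # e.g. [{'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}]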
2025-02-10 09:33:02.765249 | orchestrator | 2025-02-10 09:33:02.765255 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-02-10 09:33:02.765260 | orchestrator | Monday 10 February 2025 09:32:51 +0000 (0:00:02.040) 0:14:23.108 ******* 2025-02-10 09:33:02.765265 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.765270 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:02.765275 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:02.765280 | orchestrator | 2025-02-10 09:33:02.765286 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:33:02.765291 | orchestrator | Monday 10 February 2025 09:32:52 +0000 (0:00:01.356) 0:14:24.465 ******* 2025-02-10 09:33:02.765296 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.765301 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.765306 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.765311 | orchestrator | 2025-02-10 09:33:02.765317 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-10 09:33:02.765322 | orchestrator | Monday 10 February 2025 09:32:53 +0000 (0:00:00.762) 0:14:25.227 ******* 2025-02-10 09:33:02.765331 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:02.765336 | orchestrator | 2025-02-10 09:33:02.765345 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-02-10 09:33:02.765350 | orchestrator | Monday 10 February 2025 09:32:54 +0000 (0:00:00.916) 0:14:26.143 ******* 2025-02-10 09:33:02.765355 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.765360 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.765366 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:02.765371 | orchestrator | 2025-02-10 09:33:02.765376 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-02-10 09:33:02.765381 | orchestrator | Monday 10 February 2025 09:32:54 +0000 (0:00:00.367) 0:14:26.511 ******* 2025-02-10 09:33:02.765386 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.765392 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.765397 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.765402 | orchestrator | 2025-02-10 09:33:02.765407 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-02-10 09:33:02.765412 | orchestrator | Monday 10 February 2025 09:32:56 +0000 (0:00:01.619) 0:14:28.131 ******* 2025-02-10 09:33:02.765418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:02.765423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:02.765428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:02.765433 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:02.765438 | orchestrator | 2025-02-10 09:33:02.765444 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-02-10 09:33:02.765450 | orchestrator | Monday 10 February 2025 09:32:57 +0000 (0:00:00.744) 0:14:28.875 ******* 2025-02-10 09:33:02.765455 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:02.765461 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:02.765467 | orchestrator | ok: [testbed-node-5] 2025-02-10 
09:33:02.765473 | orchestrator | 2025-02-10 09:33:02.765478 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:33:02.765484 | orchestrator | Monday 10 February 2025 09:32:57 +0000 (0:00:00.393) 0:14:29.268 ******* 2025-02-10 09:33:02.765490 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:33:02.765496 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:33:02.765501 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:33:02.765507 | orchestrator | 2025-02-10 09:33:02.765513 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:33:02.765519 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-02-10 09:33:02.765526 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-02-10 09:33:02.765532 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-02-10 09:33:02.765541 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-02-10 09:33:05.765034 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-02-10 09:33:05.765308 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-02-10 09:33:05.765336 | orchestrator | 2025-02-10 09:33:05.765420 | orchestrator | 2025-02-10 09:33:05.765437 | orchestrator | 2025-02-10 09:33:05.765453 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:33:05.765470 | orchestrator | Monday 10 February 2025 09:32:58 +0000 (0:00:01.552) 0:14:30.821 ******* 2025-02-10 09:33:05.765518 | orchestrator | =============================================================================== 2025-02-10 09:33:05.765533 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 40.17s 2025-02-10 09:33:05.765548 | orchestrator | ceph-container-common : pulling nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 image -- 25.96s 2025-02-10 09:33:05.765564 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 24.23s 2025-02-10 09:33:05.765579 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.48s 2025-02-10 09:33:05.765593 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.15s 2025-02-10 09:33:05.765608 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.67s 2025-02-10 09:33:05.765623 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.61s 2025-02-10 09:33:05.765637 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.51s 2025-02-10 09:33:05.765651 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 8.01s 2025-02-10 09:33:05.765666 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.67s 2025-02-10 09:33:05.765681 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 7.50s 2025-02-10 09:33:05.765696 | orchestrator | ceph-config : create ceph initial directories --------------------------- 7.19s 2025-02-10 09:33:05.765710 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.34s 2025-02-10 09:33:05.765724 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.33s 2025-02-10 09:33:05.765739 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.53s 2025-02-10 09:33:05.765753 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 4.67s 2025-02-10 09:33:05.765768 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.43s 2025-02-10 09:33:05.765782 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 4.31s 2025-02-10 09:33:05.765797 | orchestrator | ceph-facts : find a running mon container ------------------------------- 4.08s 2025-02-10 09:33:05.765836 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 4.04s 2025-02-10 09:33:05.765853 | orchestrator | 2025-02-10 09:33:02 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:05.765868 | orchestrator | 2025-02-10 09:33:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:05.765904 | orchestrator | 2025-02-10 09:33:05 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:33:08.808222 | orchestrator | 2025-02-10 09:33:05 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:08.808377 | orchestrator | 2025-02-10 09:33:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:08.808421 | orchestrator | 2025-02-10 09:33:08 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:33:08.810113 | orchestrator | 2025-02-10 09:33:08 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:11.853493 | orchestrator | 2025-02-10 09:33:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:11.853650 | orchestrator | 2025-02-10 09:33:11 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:33:11.853789 | orchestrator | 2025-02-10 09:33:11 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:14.897518 | orchestrator | 2025-02-10 09:33:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:14.897684 | orchestrator | 2025-02-10 09:33:14 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 
09:33:17.960595 | orchestrator | 2025-02-10 09:33:14 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:17.960762 | orchestrator | 2025-02-10 09:33:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:17.960802 | orchestrator | 2025-02-10 09:33:17 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state STARTED 2025-02-10 09:33:17.962392 | orchestrator | 2025-02-10 09:33:17 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:21.020323 | orchestrator | 2025-02-10 09:33:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:21.020484 | orchestrator | 2025-02-10 09:33:21.020506 | orchestrator | 2025-02-10 09:33:21.020521 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-02-10 09:33:21.020536 | orchestrator | 2025-02-10 09:33:21.020550 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-02-10 09:33:21.020564 | orchestrator | Monday 10 February 2025 09:29:28 +0000 (0:00:00.179) 0:00:00.179 ******* 2025-02-10 09:33:21.020579 | orchestrator | ok: [localhost] => { 2025-02-10 09:33:21.020595 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-02-10 09:33:21.020609 | orchestrator | } 2025-02-10 09:33:21.020623 | orchestrator | 2025-02-10 09:33:21.020637 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-02-10 09:33:21.020651 | orchestrator | Monday 10 February 2025 09:29:28 +0000 (0:00:00.052) 0:00:00.232 ******* 2025-02-10 09:33:21.020665 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-02-10 09:33:21.020681 | orchestrator | ...ignoring 2025-02-10 09:33:21.020694 | orchestrator | 2025-02-10 09:33:21.020708 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-02-10 09:33:21.020722 | orchestrator | Monday 10 February 2025 09:29:30 +0000 (0:00:01.627) 0:00:01.860 ******* 2025-02-10 09:33:21.020735 | orchestrator | skipping: [localhost] 2025-02-10 09:33:21.020749 | orchestrator | 2025-02-10 09:33:21.020764 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-02-10 09:33:21.020777 | orchestrator | Monday 10 February 2025 09:29:30 +0000 (0:00:00.065) 0:00:01.926 ******* 2025-02-10 09:33:21.020791 | orchestrator | ok: [localhost] 2025-02-10 09:33:21.020804 | orchestrator | 2025-02-10 09:33:21.020818 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:33:21.020832 | orchestrator | 2025-02-10 09:33:21.020848 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:33:21.020864 | orchestrator | Monday 10 February 2025 09:29:30 +0000 (0:00:00.222) 0:00:02.149 ******* 2025-02-10 09:33:21.020880 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.020896 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:21.020911 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:21.020928 | orchestrator | 2025-02-10 09:33:21.020964 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:33:21.020981 | orchestrator | Monday 10 February 2025 09:29:30 +0000 (0:00:00.448) 0:00:02.598 ******* 
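The "Check MariaDB service" failure above is expected on a fresh deployment, as the preceding "Inform the user about the following task" message states: the play probes the internal VIP (192.168.16.9:3306) for the MariaDB banner, ignores the timeout, and only switches kolla_action_mariadb to "upgrade" when a server already answers; otherwise it keeps the regular action (kolla_action_ng), which is what happens in this run. A hedged sketch of that detection pattern, with the registered variable name assumed for illustration:

---
- name: Check MariaDB service
  ansible.builtin.wait_for:
    host: 192.168.16.9        # internal VIP probed in the log
    port: 3306
    search_regex: MariaDB     # the server greeting contains the MariaDB version string
    timeout: 2                # keep the probe short; the log shows it giving up after ~1s
  register: mariadb_check
  ignore_errors: true

- name: Set kolla_action_mariadb = upgrade if MariaDB is already running
  ansible.builtin.set_fact:
    kolla_action_mariadb: upgrade
  when: mariadb_check is succeeded

- name: Set kolla_action_mariadb = kolla_action_ng
  ansible.builtin.set_fact:
    kolla_action_mariadb: "{{ kolla_action_ng }}"
  when: mariadb_check is failed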
2025-02-10 09:33:21.020998 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-02-10 09:33:21.021014 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-02-10 09:33:21.021031 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-02-10 09:33:21.021045 | orchestrator | 2025-02-10 09:33:21.021059 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-02-10 09:33:21.021073 | orchestrator | 2025-02-10 09:33:21.021087 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-02-10 09:33:21.021101 | orchestrator | Monday 10 February 2025 09:29:31 +0000 (0:00:00.536) 0:00:03.134 ******* 2025-02-10 09:33:21.021115 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:33:21.021129 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:33:21.021143 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:33:21.021185 | orchestrator | 2025-02-10 09:33:21.021199 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:33:21.021213 | orchestrator | Monday 10 February 2025 09:29:32 +0000 (0:00:00.730) 0:00:03.865 ******* 2025-02-10 09:33:21.021227 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:21.021242 | orchestrator | 2025-02-10 09:33:21.021256 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-02-10 09:33:21.021270 | orchestrator | Monday 10 February 2025 09:29:33 +0000 (0:00:00.953) 0:00:04.819 ******* 2025-02-10 09:33:21.021303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 
rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:33:21.021323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:33:21.021349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.021374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:33:21.021391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.021406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.021421 | orchestrator | 2025-02-10 09:33:21.021444 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-02-10 09:33:21.021458 | orchestrator | Monday 10 February 2025 09:29:38 +0000 (0:00:05.211) 0:00:10.030 ******* 2025-02-10 09:33:21.021472 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.021487 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.021500 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.021514 | orchestrator | 2025-02-10 09:33:21.021528 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-02-10 09:33:21.021542 | orchestrator | Monday 10 February 2025 09:29:39 +0000 (0:00:01.218) 0:00:11.249 ******* 2025-02-10 09:33:21.021556 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.021569 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.021583 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.021597 | orchestrator | 2025-02-10 09:33:21.021611 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-02-10 09:33:21.021624 | orchestrator | Monday 10 February 2025 09:29:41 +0000 (0:00:01.700) 0:00:12.949 ******* 2025-02-10 09:33:21.021646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:33:21.021662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 
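The per-service dicts the mariadb role loops over in "Ensuring config directories exist" and "Copying over config.json files for services" above all share one shape. Condensed to YAML for readability (values copied from the testbed-node-0 items printed in the log; the top-level variable name is an assumption sketching how such a services map is typically defined), the relevant parts are:

mariadb_services:
  mariadb:
    container_name: mariadb
    group: mariadb_shard_0
    image: nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206
    healthcheck:
      test: ["CMD-SHELL", "/usr/bin/clustercheck"]
      interval: "30"
      retries: "3"
      start_period: "5"
      timeout: "30"
    haproxy:
      mariadb:
        enabled: true
        mode: tcp
        port: "3306"
        backend_tcp_extra: ["option srvtcpka", "timeout server 3600s", "option httpchk"]
        custom_member_list:
          - " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5"
          - " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup"
          - " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup"
  mariadb-clustercheck:
    container_name: mariadb_clustercheck
    image: nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206

Only testbed-node-0 is an active HAProxy member; the other two are "backup", and the health check ("option httpchk" against port 4569) is presumably answered by the mariadb_clustercheck sidecar, so HAProxy only routes traffic to Galera nodes that clustercheck reports as synced.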
2025-02-10 09:33:21.021686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:33:21.021710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.021726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.021741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.021762 | orchestrator | 2025-02-10 09:33:21.021776 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-02-10 09:33:21.021790 | orchestrator | Monday 10 February 2025 09:29:48 +0000 (0:00:07.690) 0:00:20.640 ******* 2025-02-10 09:33:21.021804 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.021818 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.021831 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.021845 | orchestrator | 2025-02-10 09:33:21.021859 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-02-10 09:33:21.021873 | orchestrator | Monday 10 February 2025 09:29:50 +0000 (0:00:01.161) 0:00:21.802 ******* 2025-02-10 09:33:21.021886 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:21.021900 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.021914 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:21.021928 | orchestrator | 2025-02-10 09:33:21.021994 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-02-10 09:33:21.022011 | orchestrator | Monday 10 February 2025 09:29:58 +0000 (0:00:08.594) 0:00:30.397 ******* 2025-02-10 09:33:21.022094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:33:21.022121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:33:21.022146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:33:21.022170 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.022186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.022207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:33:21.022222 | orchestrator | 2025-02-10 09:33:21.022236 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-02-10 09:33:21.022250 | orchestrator | Monday 10 February 2025 09:30:05 +0000 (0:00:06.983) 0:00:37.380 ******* 2025-02-10 09:33:21.022263 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.022277 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:21.022291 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:21.022305 | orchestrator | 2025-02-10 09:33:21.022319 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-02-10 09:33:21.022332 | orchestrator | Monday 10 February 2025 09:30:06 +0000 (0:00:01.307) 0:00:38.688 ******* 2025-02-10 09:33:21.022346 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.022360 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:21.022374 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:21.022394 | orchestrator | 2025-02-10 09:33:21.022408 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-02-10 09:33:21.022422 | orchestrator | Monday 10 February 2025 09:30:07 +0000 (0:00:00.534) 0:00:39.222 ******* 2025-02-10 09:33:21.022436 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.022449 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:21.022463 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:21.022477 | orchestrator | 2025-02-10 09:33:21.022490 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-02-10 09:33:21.022504 | orchestrator | Monday 10 February 
2025 09:30:07 +0000 (0:00:00.365) 0:00:39.587 ******* 2025-02-10 09:33:21.022519 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-02-10 09:33:21.022533 | orchestrator | ...ignoring 2025-02-10 09:33:21.022547 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-02-10 09:33:21.022561 | orchestrator | ...ignoring 2025-02-10 09:33:21.022580 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-02-10 09:33:21.022594 | orchestrator | ...ignoring 2025-02-10 09:33:21.022608 | orchestrator | 2025-02-10 09:33:21.022622 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-02-10 09:33:21.022635 | orchestrator | Monday 10 February 2025 09:30:19 +0000 (0:00:11.156) 0:00:50.744 ******* 2025-02-10 09:33:21.022649 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.022663 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:21.022676 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:21.022690 | orchestrator | 2025-02-10 09:33:21.022703 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-02-10 09:33:21.022717 | orchestrator | Monday 10 February 2025 09:30:19 +0000 (0:00:00.667) 0:00:51.412 ******* 2025-02-10 09:33:21.022730 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:21.022744 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.022758 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.022772 | orchestrator | 2025-02-10 09:33:21.022785 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-02-10 09:33:21.022799 | orchestrator | Monday 10 February 2025 09:30:20 +0000 (0:00:00.719) 0:00:52.131 ******* 2025-02-10 09:33:21.022819 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:21.022833 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.022847 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.022860 | orchestrator | 2025-02-10 09:33:21.022874 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-02-10 09:33:21.022888 | orchestrator | Monday 10 February 2025 09:30:20 +0000 (0:00:00.453) 0:00:52.584 ******* 2025-02-10 09:33:21.022902 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:21.022915 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.022929 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.022994 | orchestrator | 2025-02-10 09:33:21.023011 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-02-10 09:33:21.023032 | orchestrator | Monday 10 February 2025 09:30:21 +0000 (0:00:00.681) 0:00:53.265 ******* 2025-02-10 09:33:21.023046 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.023060 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:21.023074 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:21.023088 | orchestrator | 2025-02-10 09:33:21.023102 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-02-10 09:33:21.023116 | orchestrator | Monday 10 February 2025 09:30:22 +0000 (0:00:00.771) 0:00:54.036 ******* 
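The ignored failures above come from the role's port-liveness probe, which waits for the string "MariaDB" to appear in the server greeting on 3306; on a fresh deployment nothing is listening yet, so the timeout is expected and is only used to sort hosts into bootstrap groups. A rough stdlib-only approximation of that probe (my own sketch, not the Ansible wait_for module the role uses) could look like this:

```python
# Rough approximation of the liveness probe: connect to the MariaDB port and
# look for "MariaDB" in the initial handshake packet, which embeds the server
# version string. A sketch, not Ansible's wait_for module.
import socket

def mariadb_port_alive(host: str, port: int = 3306, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            greeting = sock.recv(1024)   # the server speaks first in the MySQL protocol
            return b"MariaDB" in greeting
    except OSError:
        return False

for node in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
    print(node, "up" if mariadb_port_alive(node) else "down / not yet bootstrapped")
```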
2025-02-10 09:33:21.023130 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:21.023144 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.023158 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.023172 | orchestrator | 2025-02-10 09:33:21.023186 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:33:21.023200 | orchestrator | Monday 10 February 2025 09:30:22 +0000 (0:00:00.695) 0:00:54.732 ******* 2025-02-10 09:33:21.023214 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.023227 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.023241 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-02-10 09:33:21.023255 | orchestrator | 2025-02-10 09:33:21.023269 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-02-10 09:33:21.023283 | orchestrator | Monday 10 February 2025 09:30:23 +0000 (0:00:00.572) 0:00:55.305 ******* 2025-02-10 09:33:21.023297 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.023311 | orchestrator | 2025-02-10 09:33:21.023324 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-02-10 09:33:21.023338 | orchestrator | Monday 10 February 2025 09:30:36 +0000 (0:00:13.176) 0:01:08.481 ******* 2025-02-10 09:33:21.023351 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.023363 | orchestrator | 2025-02-10 09:33:21.023375 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:33:21.023388 | orchestrator | Monday 10 February 2025 09:30:36 +0000 (0:00:00.142) 0:01:08.624 ******* 2025-02-10 09:33:21.023401 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:21.023413 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.023425 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.023438 | orchestrator | 2025-02-10 09:33:21.023450 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-02-10 09:33:21.023468 | orchestrator | Monday 10 February 2025 09:30:38 +0000 (0:00:01.247) 0:01:09.872 ******* 2025-02-10 09:33:21.023480 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.023493 | orchestrator | 2025-02-10 09:33:21.023505 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-02-10 09:33:21.023518 | orchestrator | Monday 10 February 2025 09:30:48 +0000 (0:00:10.604) 0:01:20.477 ******* 2025-02-10 09:33:21.023530 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
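The "FAILED - RETRYING ... (10 retries left)" line reflects the usual retries/delay pattern on the wait handler: the first node needs a moment after the bootstrap container starts before 3306 answers. A generic retry wrapper in the same spirit (a sketch, not the role's implementation; the retry count and delay are assumptions) might be:

```python
# Generic retry helper in the spirit of Ansible's retries/delay, used here to
# wait for the freshly bootstrapped MariaDB node to answer on 3306.
# Retry count and delay are assumptions, not values read from the role.
import time
from typing import Callable

def wait_until(check: Callable[[], bool], retries: int = 10, delay: float = 6.0) -> bool:
    for attempt in range(1, retries + 1):
        if check():
            return True
        print(f"FAILED - RETRYING ({retries - attempt} retries left)")
        time.sleep(delay)
    return False

# Example usage with the port probe sketched earlier:
# wait_until(lambda: mariadb_port_alive("192.168.16.10"))
```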
2025-02-10 09:33:21.023542 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.023554 | orchestrator | 2025-02-10 09:33:21.023567 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-02-10 09:33:21.023587 | orchestrator | Monday 10 February 2025 09:30:55 +0000 (0:00:07.202) 0:01:27.679 ******* 2025-02-10 09:33:21.023600 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.023612 | orchestrator | 2025-02-10 09:33:21.023625 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-02-10 09:33:21.023637 | orchestrator | Monday 10 February 2025 09:30:58 +0000 (0:00:02.972) 0:01:30.652 ******* 2025-02-10 09:33:21.023650 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.023662 | orchestrator | 2025-02-10 09:33:21.023674 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-02-10 09:33:21.023687 | orchestrator | Monday 10 February 2025 09:30:59 +0000 (0:00:00.140) 0:01:30.793 ******* 2025-02-10 09:33:21.023699 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:21.023711 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.023723 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.023736 | orchestrator | 2025-02-10 09:33:21.023748 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-02-10 09:33:21.023760 | orchestrator | Monday 10 February 2025 09:30:59 +0000 (0:00:00.526) 0:01:31.319 ******* 2025-02-10 09:33:21.023772 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:21.023785 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:21.023797 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:21.023809 | orchestrator | 2025-02-10 09:33:21.023821 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-02-10 09:33:21.023834 | orchestrator | Monday 10 February 2025 09:31:00 +0000 (0:00:00.783) 0:01:32.103 ******* 2025-02-10 09:33:21.023846 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-02-10 09:33:21.023858 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.023870 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:21.023882 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:21.023895 | orchestrator | 2025-02-10 09:33:21.023907 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-02-10 09:33:21.023919 | orchestrator | skipping: no hosts matched 2025-02-10 09:33:21.023932 | orchestrator | 2025-02-10 09:33:21.023958 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-10 09:33:21.023971 | orchestrator | 2025-02-10 09:33:21.023983 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-10 09:33:21.023995 | orchestrator | Monday 10 February 2025 09:31:17 +0000 (0:00:16.853) 0:01:48.956 ******* 2025-02-10 09:33:21.024008 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:21.024020 | orchestrator | 2025-02-10 09:33:21.024033 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-10 09:33:21.024045 | orchestrator | Monday 10 February 2025 09:31:39 +0000 (0:00:22.510) 0:02:11.467 ******* 2025-02-10 09:33:21.024057 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:21.024069 | orchestrator | 
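The WSREP handlers above gate every start and restart on Galera reporting the node as synced; the check boils down to reading `wsrep_local_state_comment` and comparing it to `Synced`. A hedged sketch of that query using the mysql command-line client (client binary, credentials, and invocation details are assumptions for illustration) is shown below.

```python
# Sketch of the WSREP sync check: ask the node for wsrep_local_state_comment
# and treat anything other than "Synced" as not ready. Client binary and
# credentials are assumptions for illustration only.
import subprocess

def wsrep_synced(host: str, user: str = "root", password: str = "") -> bool:
    query = "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
    cmd = ["mysql", "-h", host, "-u", user, "-N", "-B", "-e", query]
    if password:
        cmd.append(f"--password={password}")
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return False
    # Batch output looks like: "wsrep_local_state_comment\tSynced"
    fields = result.stdout.strip().split("\t")
    return len(fields) == 2 and fields[1] == "Synced"
```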
2025-02-10 09:33:21.024081 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-10 09:33:21.024094 | orchestrator | Monday 10 February 2025 09:31:55 +0000 (0:00:15.623) 0:02:27.090 ******* 2025-02-10 09:33:21.024106 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:21.024118 | orchestrator | 2025-02-10 09:33:21.024131 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-10 09:33:21.024143 | orchestrator | 2025-02-10 09:33:21.024161 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-10 09:33:21.024174 | orchestrator | Monday 10 February 2025 09:31:58 +0000 (0:00:02.915) 0:02:30.006 ******* 2025-02-10 09:33:21.024186 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:21.024198 | orchestrator | 2025-02-10 09:33:21.024211 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-10 09:33:21.024223 | orchestrator | Monday 10 February 2025 09:32:14 +0000 (0:00:16.668) 0:02:46.675 ******* 2025-02-10 09:33:21.024235 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:21.024248 | orchestrator | 2025-02-10 09:33:21.024260 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-10 09:33:21.024279 | orchestrator | Monday 10 February 2025 09:32:35 +0000 (0:00:20.651) 0:03:07.327 ******* 2025-02-10 09:33:21.024292 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:21.024304 | orchestrator | 2025-02-10 09:33:21.024317 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-02-10 09:33:21.024329 | orchestrator | 2025-02-10 09:33:21.024341 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-10 09:33:21.024364 | orchestrator | Monday 10 February 2025 09:32:38 +0000 (0:00:02.837) 0:03:10.164 ******* 2025-02-10 09:33:21.024377 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.024389 | orchestrator | 2025-02-10 09:33:21.024402 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-10 09:33:21.024414 | orchestrator | Monday 10 February 2025 09:32:54 +0000 (0:00:16.160) 0:03:26.324 ******* 2025-02-10 09:33:21.024426 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.024439 | orchestrator | 2025-02-10 09:33:21.024451 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-10 09:33:21.024463 | orchestrator | Monday 10 February 2025 09:32:59 +0000 (0:00:04.594) 0:03:30.918 ******* 2025-02-10 09:33:21.024475 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.024488 | orchestrator | 2025-02-10 09:33:21.024501 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-02-10 09:33:21.024514 | orchestrator | 2025-02-10 09:33:21.024526 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-02-10 09:33:21.024539 | orchestrator | Monday 10 February 2025 09:33:02 +0000 (0:00:03.261) 0:03:34.180 ******* 2025-02-10 09:33:21.024551 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:21.024563 | orchestrator | 2025-02-10 09:33:21.024575 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-02-10 09:33:21.024588 | orchestrator | Monday 10 
February 2025 09:33:03 +0000 (0:00:00.875) 0:03:35.056 ******* 2025-02-10 09:33:21.024600 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.024620 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.024634 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.024647 | orchestrator | 2025-02-10 09:33:21.024659 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-02-10 09:33:21.024672 | orchestrator | Monday 10 February 2025 09:33:06 +0000 (0:00:03.013) 0:03:38.069 ******* 2025-02-10 09:33:21.024684 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.024697 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.024709 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.024721 | orchestrator | 2025-02-10 09:33:21.024734 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-02-10 09:33:21.024746 | orchestrator | Monday 10 February 2025 09:33:08 +0000 (0:00:02.403) 0:03:40.473 ******* 2025-02-10 09:33:21.024759 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.024771 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.024783 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.024796 | orchestrator | 2025-02-10 09:33:21.024808 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-02-10 09:33:21.024820 | orchestrator | Monday 10 February 2025 09:33:11 +0000 (0:00:02.787) 0:03:43.261 ******* 2025-02-10 09:33:21.024833 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.024845 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.024857 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:21.024870 | orchestrator | 2025-02-10 09:33:21.024882 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-02-10 09:33:21.024894 | orchestrator | Monday 10 February 2025 09:33:13 +0000 (0:00:02.413) 0:03:45.674 ******* 2025-02-10 09:33:21.024907 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:21.024919 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:21.024931 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:21.024982 | orchestrator | 2025-02-10 09:33:21.025003 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-02-10 09:33:21.025015 | orchestrator | Monday 10 February 2025 09:33:18 +0000 (0:00:04.887) 0:03:50.562 ******* 2025-02-10 09:33:21.025028 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:21.025040 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:21.025052 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:21.025065 | orchestrator | 2025-02-10 09:33:21.025077 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:33:21.025090 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-02-10 09:33:21.025102 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-02-10 09:33:21.025116 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-02-10 09:33:21.025129 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-02-10 09:33:21.025142 | orchestrator | 2025-02-10 09:33:21.025154 | orchestrator | 2025-02-10 
09:33:21.025167 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:33:21.025185 | orchestrator | Monday 10 February 2025 09:33:19 +0000 (0:00:00.435) 0:03:50.998 ******* 2025-02-10 09:33:24.074219 | orchestrator | =============================================================================== 2025-02-10 09:33:24.074354 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.18s 2025-02-10 09:33:24.074375 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.28s 2025-02-10 09:33:24.074391 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 16.85s 2025-02-10 09:33:24.074407 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.16s 2025-02-10 09:33:24.074422 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 13.18s 2025-02-10 09:33:24.074436 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.16s 2025-02-10 09:33:24.074451 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.60s 2025-02-10 09:33:24.074487 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 8.59s 2025-02-10 09:33:24.074502 | orchestrator | mariadb : Copying over config.json files for services ------------------- 7.69s 2025-02-10 09:33:24.074516 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.20s 2025-02-10 09:33:24.074530 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 6.98s 2025-02-10 09:33:24.074544 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.75s 2025-02-10 09:33:24.074558 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 5.21s 2025-02-10 09:33:24.074572 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 4.89s 2025-02-10 09:33:24.074586 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2025-02-10 09:33:24.074600 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.26s 2025-02-10 09:33:24.074614 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.01s 2025-02-10 09:33:24.074628 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.97s 2025-02-10 09:33:24.074642 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.79s 2025-02-10 09:33:24.074659 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.41s 2025-02-10 09:33:24.074675 | orchestrator | 2025-02-10 09:33:21 | INFO  | Task caa32d92-fc79-4d10-8a4a-5329e6ee3395 is in state SUCCESS 2025-02-10 09:33:24.074692 | orchestrator | 2025-02-10 09:33:21 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:24.074734 | orchestrator | 2025-02-10 09:33:21 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:24.074753 | orchestrator | 2025-02-10 09:33:21 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:24.074769 | orchestrator | 2025-02-10 09:33:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:24.074805 | orchestrator | 2025-02-10 09:33:24 
| INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:24.077388 | orchestrator | 2025-02-10 09:33:24 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:24.081146 | orchestrator | 2025-02-10 09:33:24 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:27.122509 | orchestrator | 2025-02-10 09:33:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:27.122664 | orchestrator | 2025-02-10 09:33:27 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:27.123002 | orchestrator | 2025-02-10 09:33:27 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:27.123478 | orchestrator | 2025-02-10 09:33:27 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:30.170850 | orchestrator | 2025-02-10 09:33:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:30.171151 | orchestrator | 2025-02-10 09:33:30 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:30.172686 | orchestrator | 2025-02-10 09:33:30 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:30.174385 | orchestrator | 2025-02-10 09:33:30 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:30.181891 | orchestrator | 2025-02-10 09:33:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:33.233640 | orchestrator | 2025-02-10 09:33:33 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:36.293876 | orchestrator | 2025-02-10 09:33:33 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:36.294300 | orchestrator | 2025-02-10 09:33:33 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:36.294328 | orchestrator | 2025-02-10 09:33:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:36.294363 | orchestrator | 2025-02-10 09:33:36 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:39.337767 | orchestrator | 2025-02-10 09:33:36 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:39.337921 | orchestrator | 2025-02-10 09:33:36 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:39.337943 | orchestrator | 2025-02-10 09:33:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:39.338103 | orchestrator | 2025-02-10 09:33:39 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:39.342545 | orchestrator | 2025-02-10 09:33:39 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:39.342614 | orchestrator | 2025-02-10 09:33:39 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:42.393529 | orchestrator | 2025-02-10 09:33:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:42.393701 | orchestrator | 2025-02-10 09:33:42 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:42.394389 | orchestrator | 2025-02-10 09:33:42 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:42.394532 | orchestrator | 2025-02-10 09:33:42 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:45.448414 | orchestrator | 2025-02-10 09:33:42 | INFO  | Wait 1 second(s) 
until the next check 2025-02-10 09:33:45.448578 | orchestrator | 2025-02-10 09:33:45 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:45.449175 | orchestrator | 2025-02-10 09:33:45 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:45.449216 | orchestrator | 2025-02-10 09:33:45 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:48.494672 | orchestrator | 2025-02-10 09:33:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:48.494807 | orchestrator | 2025-02-10 09:33:48 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:48.495143 | orchestrator | 2025-02-10 09:33:48 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:48.496386 | orchestrator | 2025-02-10 09:33:48 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:51.561525 | orchestrator | 2025-02-10 09:33:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:51.561689 | orchestrator | 2025-02-10 09:33:51 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:51.563246 | orchestrator | 2025-02-10 09:33:51 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:51.565059 | orchestrator | 2025-02-10 09:33:51 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:54.607324 | orchestrator | 2025-02-10 09:33:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:54.607479 | orchestrator | 2025-02-10 09:33:54 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:54.608476 | orchestrator | 2025-02-10 09:33:54 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:54.608513 | orchestrator | 2025-02-10 09:33:54 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:57.649322 | orchestrator | 2025-02-10 09:33:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:57.649489 | orchestrator | 2025-02-10 09:33:57 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:33:57.649803 | orchestrator | 2025-02-10 09:33:57 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:33:57.650836 | orchestrator | 2025-02-10 09:33:57 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:33:57.651110 | orchestrator | 2025-02-10 09:33:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:00.698000 | orchestrator | 2025-02-10 09:34:00 | INFO  | Task a8530c54-1f30-43ae-8168-ba40b144878a is in state STARTED 2025-02-10 09:34:00.698727 | orchestrator | 2025-02-10 09:34:00 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:00.700091 | orchestrator | 2025-02-10 09:34:00 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:00.701192 | orchestrator | 2025-02-10 09:34:00 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:03.751781 | orchestrator | 2025-02-10 09:34:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:03.751933 | orchestrator | 2025-02-10 09:34:03 | INFO  | Task a8530c54-1f30-43ae-8168-ba40b144878a is in state STARTED 2025-02-10 09:34:03.752189 | orchestrator | 2025-02-10 09:34:03 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 
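The surrounding output is the OSISM orchestrator polling its task IDs once a second until they leave the STARTED state. The waiting pattern itself is straightforward; the sketch below imitates it with a hypothetical `get_task_state()` lookup (that function and its signature are placeholders, not the real OSISM client API).

```python
# Imitation of the polling loop visible in the log: check a set of task IDs
# every second and stop once none of them is still STARTED. get_task_state()
# is a hypothetical placeholder, not the real OSISM client API.
import time

def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0) -> dict:
    while True:
        states = {tid: get_task_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state not in ("STARTED", "PENDING") for state in states.values()):
            return states
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```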
2025-02-10 09:34:03.754278 | orchestrator | 2025-02-10 09:34:03 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:03.755493 | orchestrator | 2025-02-10 09:34:03 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:03.755887 | orchestrator | 2025-02-10 09:34:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:06.810659 | orchestrator | 2025-02-10 09:34:06 | INFO  | Task a8530c54-1f30-43ae-8168-ba40b144878a is in state STARTED 2025-02-10 09:34:06.811006 | orchestrator | 2025-02-10 09:34:06 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:06.812796 | orchestrator | 2025-02-10 09:34:06 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:06.814130 | orchestrator | 2025-02-10 09:34:06 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:09.883643 | orchestrator | 2025-02-10 09:34:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:09.883840 | orchestrator | 2025-02-10 09:34:09 | INFO  | Task a8530c54-1f30-43ae-8168-ba40b144878a is in state STARTED 2025-02-10 09:34:09.885524 | orchestrator | 2025-02-10 09:34:09 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:09.888132 | orchestrator | 2025-02-10 09:34:09 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:09.889760 | orchestrator | 2025-02-10 09:34:09 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:12.939378 | orchestrator | 2025-02-10 09:34:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:12.939580 | orchestrator | 2025-02-10 09:34:12 | INFO  | Task a8530c54-1f30-43ae-8168-ba40b144878a is in state STARTED 2025-02-10 09:34:12.941620 | orchestrator | 2025-02-10 09:34:12 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:12.944244 | orchestrator | 2025-02-10 09:34:12 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:12.945075 | orchestrator | 2025-02-10 09:34:12 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:12.945677 | orchestrator | 2025-02-10 09:34:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:16.004472 | orchestrator | 2025-02-10 09:34:16 | INFO  | Task a8530c54-1f30-43ae-8168-ba40b144878a is in state SUCCESS 2025-02-10 09:34:16.005236 | orchestrator | 2025-02-10 09:34:16 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:16.005283 | orchestrator | 2025-02-10 09:34:16 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:16.005799 | orchestrator | 2025-02-10 09:34:16 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:19.053851 | orchestrator | 2025-02-10 09:34:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:19.054143 | orchestrator | 2025-02-10 09:34:19 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:19.054664 | orchestrator | 2025-02-10 09:34:19 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:19.056328 | orchestrator | 2025-02-10 09:34:19 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:19.056828 | orchestrator | 2025-02-10 09:34:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:22.104789 
| orchestrator | 2025-02-10 09:34:22 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:22.106722 | orchestrator | 2025-02-10 09:34:22 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:25.153867 | orchestrator | 2025-02-10 09:34:22 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:25.154235 | orchestrator | 2025-02-10 09:34:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:25.154319 | orchestrator | 2025-02-10 09:34:25 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:25.155173 | orchestrator | 2025-02-10 09:34:25 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:25.155214 | orchestrator | 2025-02-10 09:34:25 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:28.198318 | orchestrator | 2025-02-10 09:34:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:28.198462 | orchestrator | 2025-02-10 09:34:28 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:28.199366 | orchestrator | 2025-02-10 09:34:28 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:28.201052 | orchestrator | 2025-02-10 09:34:28 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:31.244560 | orchestrator | 2025-02-10 09:34:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:31.244721 | orchestrator | 2025-02-10 09:34:31 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:31.245251 | orchestrator | 2025-02-10 09:34:31 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:31.247128 | orchestrator | 2025-02-10 09:34:31 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:34.290681 | orchestrator | 2025-02-10 09:34:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:34.291068 | orchestrator | 2025-02-10 09:34:34 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:37.393702 | orchestrator | 2025-02-10 09:34:34 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:37.393832 | orchestrator | 2025-02-10 09:34:34 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:37.393852 | orchestrator | 2025-02-10 09:34:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:37.393887 | orchestrator | 2025-02-10 09:34:37 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:37.394419 | orchestrator | 2025-02-10 09:34:37 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:37.396267 | orchestrator | 2025-02-10 09:34:37 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:40.439458 | orchestrator | 2025-02-10 09:34:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:40.439604 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:40.440167 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:40.441086 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:43.483738 | orchestrator | 
2025-02-10 09:34:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:43.483876 | orchestrator | 2025-02-10 09:34:43 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:43.486377 | orchestrator | 2025-02-10 09:34:43 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:43.491607 | orchestrator | 2025-02-10 09:34:43 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:46.531760 | orchestrator | 2025-02-10 09:34:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:46.532071 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:46.532836 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:46.532872 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:46.533444 | orchestrator | 2025-02-10 09:34:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:49.577046 | orchestrator | 2025-02-10 09:34:49 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:49.579612 | orchestrator | 2025-02-10 09:34:49 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:49.581667 | orchestrator | 2025-02-10 09:34:49 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:52.622278 | orchestrator | 2025-02-10 09:34:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:52.622465 | orchestrator | 2025-02-10 09:34:52 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:55.671486 | orchestrator | 2025-02-10 09:34:52 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:55.671606 | orchestrator | 2025-02-10 09:34:52 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:55.671623 | orchestrator | 2025-02-10 09:34:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:55.671658 | orchestrator | 2025-02-10 09:34:55 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:55.672562 | orchestrator | 2025-02-10 09:34:55 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:55.672617 | orchestrator | 2025-02-10 09:34:55 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:34:58.709955 | orchestrator | 2025-02-10 09:34:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:58.710263 | orchestrator | 2025-02-10 09:34:58 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:34:58.714397 | orchestrator | 2025-02-10 09:34:58 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:34:58.714456 | orchestrator | 2025-02-10 09:34:58 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:35:01.760464 | orchestrator | 2025-02-10 09:34:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:01.760619 | orchestrator | 2025-02-10 09:35:01 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:01.762170 | orchestrator | 2025-02-10 09:35:01 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:35:01.764319 | orchestrator | 2025-02-10 09:35:01 | INFO  | Task 
11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:35:04.808130 | orchestrator | 2025-02-10 09:35:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:04.808278 | orchestrator | 2025-02-10 09:35:04 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:04.810494 | orchestrator | 2025-02-10 09:35:04 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:35:04.810564 | orchestrator | 2025-02-10 09:35:04 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:35:07.851263 | orchestrator | 2025-02-10 09:35:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:07.851418 | orchestrator | 2025-02-10 09:35:07 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:07.852380 | orchestrator | 2025-02-10 09:35:07 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state STARTED 2025-02-10 09:35:07.854648 | orchestrator | 2025-02-10 09:35:07 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:35:10.903848 | orchestrator | 2025-02-10 09:35:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:10.904073 | orchestrator | 2025-02-10 09:35:10 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:10.905330 | orchestrator | 2025-02-10 09:35:10 | INFO  | Task 63ac84ee-662b-439a-a324-d1e493cf6ed3 is in state SUCCESS 2025-02-10 09:35:10.906854 | orchestrator | 2025-02-10 09:35:10.906943 | orchestrator | None 2025-02-10 09:35:10.906961 | orchestrator | 2025-02-10 09:35:10.906975 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:35:10.907031 | orchestrator | 2025-02-10 09:35:10.907045 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:35:10.907057 | orchestrator | Monday 10 February 2025 09:33:23 +0000 (0:00:00.339) 0:00:00.339 ******* 2025-02-10 09:35:10.907070 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.907084 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.907096 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.907109 | orchestrator | 2025-02-10 09:35:10.907121 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:35:10.907134 | orchestrator | Monday 10 February 2025 09:33:24 +0000 (0:00:00.472) 0:00:00.812 ******* 2025-02-10 09:35:10.907147 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-02-10 09:35:10.907159 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-02-10 09:35:10.907172 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-02-10 09:35:10.907185 | orchestrator | 2025-02-10 09:35:10.907197 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-02-10 09:35:10.907209 | orchestrator | 2025-02-10 09:35:10.907222 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:35:10.907235 | orchestrator | Monday 10 February 2025 09:33:24 +0000 (0:00:00.390) 0:00:01.203 ******* 2025-02-10 09:35:10.907248 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:35:10.907262 | orchestrator | 2025-02-10 09:35:10.907690 | orchestrator | TASK [horizon : Ensuring config directories exist] 
***************************** 2025-02-10 09:35:10.907719 | orchestrator | Monday 10 February 2025 09:33:25 +0000 (0:00:01.107) 0:00:02.310 ******* 2025-02-10 09:35:10.907740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.907805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.907823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.907847 | orchestrator | 2025-02-10 09:35:10.907861 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-02-10 09:35:10.907874 | orchestrator | Monday 10 February 2025 09:33:27 +0000 (0:00:01.991) 0:00:04.302 ******* 
2025-02-10 09:35:10.907886 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.907900 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.907912 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.907925 | orchestrator | 2025-02-10 09:35:10.907937 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:35:10.907958 | orchestrator | Monday 10 February 2025 09:33:28 +0000 (0:00:00.366) 0:00:04.668 ******* 2025-02-10 09:35:10.907971 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-10 09:35:10.908094 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-02-10 09:35:10.908115 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-02-10 09:35:10.908136 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-02-10 09:35:10.908157 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-02-10 09:35:10.908198 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-02-10 09:35:10.908213 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-10 09:35:10.908227 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-02-10 09:35:10.908240 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-02-10 09:35:10.908253 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-02-10 09:35:10.908265 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-02-10 09:35:10.908278 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-02-10 09:35:10.908291 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-10 09:35:10.908304 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-02-10 09:35:10.908316 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-02-10 09:35:10.908344 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-02-10 09:35:10.908357 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-02-10 09:35:10.908369 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-02-10 09:35:10.908383 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-02-10 09:35:10.908399 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-02-10 09:35:10.908413 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-02-10 09:35:10.908425 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-02-10 09:35:10.908439 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-02-10 09:35:10.908453 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ironic', 'enabled': True}) 2025-02-10 09:35:10.908466 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-02-10 09:35:10.908478 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-02-10 09:35:10.908491 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-02-10 09:35:10.908504 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-02-10 09:35:10.908517 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-02-10 09:35:10.908529 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-02-10 09:35:10.908542 | orchestrator | 2025-02-10 09:35:10.908554 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.908567 | orchestrator | Monday 10 February 2025 09:33:29 +0000 (0:00:01.174) 0:00:05.843 ******* 2025-02-10 09:35:10.908579 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.908592 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.908605 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.908618 | orchestrator | 2025-02-10 09:35:10.908630 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.908643 | orchestrator | Monday 10 February 2025 09:33:29 +0000 (0:00:00.611) 0:00:06.454 ******* 2025-02-10 09:35:10.908656 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.908669 | orchestrator | 2025-02-10 09:35:10.908682 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.908704 | orchestrator | Monday 10 February 2025 09:33:29 +0000 (0:00:00.133) 0:00:06.588 ******* 2025-02-10 09:35:10.908718 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.908732 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.908744 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.908757 | orchestrator | 2025-02-10 09:35:10.908770 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.908782 | orchestrator | Monday 10 February 2025 09:33:30 +0000 (0:00:00.511) 0:00:07.099 ******* 2025-02-10 09:35:10.908824 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.908838 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.908850 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.908862 | orchestrator | 2025-02-10 09:35:10.908875 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.908888 | orchestrator | Monday 10 February 2025 09:33:30 +0000 (0:00:00.369) 
0:00:07.468 ******* 2025-02-10 09:35:10.908901 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.908914 | orchestrator | 2025-02-10 09:35:10.908926 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.908939 | orchestrator | Monday 10 February 2025 09:33:31 +0000 (0:00:00.149) 0:00:07.618 ******* 2025-02-10 09:35:10.908951 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.908963 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.908976 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.909016 | orchestrator | 2025-02-10 09:35:10.909030 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.909043 | orchestrator | Monday 10 February 2025 09:33:31 +0000 (0:00:00.606) 0:00:08.225 ******* 2025-02-10 09:35:10.909056 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.909068 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.909081 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.909093 | orchestrator | 2025-02-10 09:35:10.909107 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.909119 | orchestrator | Monday 10 February 2025 09:33:32 +0000 (0:00:00.630) 0:00:08.855 ******* 2025-02-10 09:35:10.909132 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.909145 | orchestrator | 2025-02-10 09:35:10.909157 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.909170 | orchestrator | Monday 10 February 2025 09:33:32 +0000 (0:00:00.143) 0:00:08.999 ******* 2025-02-10 09:35:10.909183 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.909196 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.909209 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.909222 | orchestrator | 2025-02-10 09:35:10.909235 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.909255 | orchestrator | Monday 10 February 2025 09:33:32 +0000 (0:00:00.459) 0:00:09.458 ******* 2025-02-10 09:35:10.909268 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.909281 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.909294 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.909307 | orchestrator | 2025-02-10 09:35:10.909319 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.909332 | orchestrator | Monday 10 February 2025 09:33:33 +0000 (0:00:00.592) 0:00:10.051 ******* 2025-02-10 09:35:10.909346 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.909359 | orchestrator | 2025-02-10 09:35:10.909372 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.909385 | orchestrator | Monday 10 February 2025 09:33:33 +0000 (0:00:00.183) 0:00:10.234 ******* 2025-02-10 09:35:10.909397 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.909410 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.909429 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.909441 | orchestrator | 2025-02-10 09:35:10.909454 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.909466 | orchestrator | Monday 10 February 2025 09:33:34 +0000 (0:00:00.514) 0:00:10.749 ******* 
2025-02-10 09:35:10.909478 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.909492 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.909505 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.909517 | orchestrator | 2025-02-10 09:35:10.909530 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.909542 | orchestrator | Monday 10 February 2025 09:33:34 +0000 (0:00:00.373) 0:00:11.122 ******* 2025-02-10 09:35:10.909554 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.909575 | orchestrator | 2025-02-10 09:35:10.909587 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.909600 | orchestrator | Monday 10 February 2025 09:33:35 +0000 (0:00:00.592) 0:00:11.714 ******* 2025-02-10 09:35:10.909613 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.909626 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.909639 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.909651 | orchestrator | 2025-02-10 09:35:10.909664 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.909677 | orchestrator | Monday 10 February 2025 09:33:35 +0000 (0:00:00.592) 0:00:12.307 ******* 2025-02-10 09:35:10.909689 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.909701 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.909714 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.909727 | orchestrator | 2025-02-10 09:35:10.909739 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.909752 | orchestrator | Monday 10 February 2025 09:33:36 +0000 (0:00:00.631) 0:00:12.938 ******* 2025-02-10 09:35:10.909764 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.909777 | orchestrator | 2025-02-10 09:35:10.909789 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.909801 | orchestrator | Monday 10 February 2025 09:33:36 +0000 (0:00:00.145) 0:00:13.084 ******* 2025-02-10 09:35:10.909814 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.909827 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.909840 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.909852 | orchestrator | 2025-02-10 09:35:10.909864 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.909877 | orchestrator | Monday 10 February 2025 09:33:37 +0000 (0:00:00.529) 0:00:13.614 ******* 2025-02-10 09:35:10.909889 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.909902 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.909922 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.909936 | orchestrator | 2025-02-10 09:35:10.909950 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.909962 | orchestrator | Monday 10 February 2025 09:33:37 +0000 (0:00:00.534) 0:00:14.148 ******* 2025-02-10 09:35:10.909975 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910076 | orchestrator | 2025-02-10 09:35:10.910095 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.910108 | orchestrator | Monday 10 February 2025 09:33:37 +0000 (0:00:00.212) 0:00:14.360 ******* 2025-02-10 09:35:10.910122 | orchestrator 
| skipping: [testbed-node-0] 2025-02-10 09:35:10.910134 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.910147 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.910160 | orchestrator | 2025-02-10 09:35:10.910173 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.910185 | orchestrator | Monday 10 February 2025 09:33:38 +0000 (0:00:00.542) 0:00:14.903 ******* 2025-02-10 09:35:10.910198 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.910211 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.910224 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.910236 | orchestrator | 2025-02-10 09:35:10.910249 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.910269 | orchestrator | Monday 10 February 2025 09:33:38 +0000 (0:00:00.338) 0:00:15.241 ******* 2025-02-10 09:35:10.910283 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910296 | orchestrator | 2025-02-10 09:35:10.910309 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.910322 | orchestrator | Monday 10 February 2025 09:33:38 +0000 (0:00:00.297) 0:00:15.539 ******* 2025-02-10 09:35:10.910335 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910349 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.910361 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.910374 | orchestrator | 2025-02-10 09:35:10.910394 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.910406 | orchestrator | Monday 10 February 2025 09:33:39 +0000 (0:00:00.338) 0:00:15.877 ******* 2025-02-10 09:35:10.910419 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.910431 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.910443 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.910456 | orchestrator | 2025-02-10 09:35:10.910473 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.910486 | orchestrator | Monday 10 February 2025 09:33:39 +0000 (0:00:00.529) 0:00:16.407 ******* 2025-02-10 09:35:10.910499 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910511 | orchestrator | 2025-02-10 09:35:10.910523 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.910536 | orchestrator | Monday 10 February 2025 09:33:39 +0000 (0:00:00.151) 0:00:16.558 ******* 2025-02-10 09:35:10.910548 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910561 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.910573 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.910586 | orchestrator | 2025-02-10 09:35:10.910599 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.910611 | orchestrator | Monday 10 February 2025 09:33:40 +0000 (0:00:00.605) 0:00:17.164 ******* 2025-02-10 09:35:10.910623 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.910636 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.910649 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.910662 | orchestrator | 2025-02-10 09:35:10.910674 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.910687 | orchestrator | Monday 10 
February 2025 09:33:41 +0000 (0:00:00.520) 0:00:17.685 ******* 2025-02-10 09:35:10.910699 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910712 | orchestrator | 2025-02-10 09:35:10.910725 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.910737 | orchestrator | Monday 10 February 2025 09:33:41 +0000 (0:00:00.137) 0:00:17.822 ******* 2025-02-10 09:35:10.910749 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910762 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.910774 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.910786 | orchestrator | 2025-02-10 09:35:10.910799 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.910812 | orchestrator | Monday 10 February 2025 09:33:41 +0000 (0:00:00.688) 0:00:18.511 ******* 2025-02-10 09:35:10.910825 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.910837 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.910850 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.910862 | orchestrator | 2025-02-10 09:35:10.910874 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.910888 | orchestrator | Monday 10 February 2025 09:33:42 +0000 (0:00:00.895) 0:00:19.407 ******* 2025-02-10 09:35:10.910901 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910913 | orchestrator | 2025-02-10 09:35:10.910925 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.910938 | orchestrator | Monday 10 February 2025 09:33:43 +0000 (0:00:00.268) 0:00:19.676 ******* 2025-02-10 09:35:10.910951 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.910963 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.910976 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.911015 | orchestrator | 2025-02-10 09:35:10.911029 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:35:10.911041 | orchestrator | Monday 10 February 2025 09:33:43 +0000 (0:00:00.701) 0:00:20.377 ******* 2025-02-10 09:35:10.911054 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:35:10.911072 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:35:10.911084 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:35:10.911097 | orchestrator | 2025-02-10 09:35:10.911109 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:35:10.911129 | orchestrator | Monday 10 February 2025 09:33:44 +0000 (0:00:00.520) 0:00:20.897 ******* 2025-02-10 09:35:10.911143 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.911155 | orchestrator | 2025-02-10 09:35:10.911168 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:35:10.911182 | orchestrator | Monday 10 February 2025 09:33:44 +0000 (0:00:00.381) 0:00:21.279 ******* 2025-02-10 09:35:10.911203 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.911217 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.911229 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.911242 | orchestrator | 2025-02-10 09:35:10.911255 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-02-10 09:35:10.911267 | orchestrator | Monday 10 February 2025 09:33:44 
+0000 (0:00:00.306) 0:00:21.585 ******* 2025-02-10 09:35:10.911279 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:35:10.911292 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:35:10.911304 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:35:10.911317 | orchestrator | 2025-02-10 09:35:10.911329 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-02-10 09:35:10.911342 | orchestrator | Monday 10 February 2025 09:33:48 +0000 (0:00:03.606) 0:00:25.192 ******* 2025-02-10 09:35:10.911354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-10 09:35:10.911367 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-10 09:35:10.911379 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-10 09:35:10.911392 | orchestrator | 2025-02-10 09:35:10.911405 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-02-10 09:35:10.911423 | orchestrator | Monday 10 February 2025 09:33:51 +0000 (0:00:03.276) 0:00:28.468 ******* 2025-02-10 09:35:10.911436 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-10 09:35:10.911449 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-10 09:35:10.911461 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-10 09:35:10.911473 | orchestrator | 2025-02-10 09:35:10.911486 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-02-10 09:35:10.911498 | orchestrator | Monday 10 February 2025 09:33:54 +0000 (0:00:03.121) 0:00:31.590 ******* 2025-02-10 09:35:10.911511 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-10 09:35:10.911523 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-10 09:35:10.911536 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-10 09:35:10.911548 | orchestrator | 2025-02-10 09:35:10.911561 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-02-10 09:35:10.911575 | orchestrator | Monday 10 February 2025 09:33:57 +0000 (0:00:02.607) 0:00:34.197 ******* 2025-02-10 09:35:10.911587 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.911599 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.911611 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.911624 | orchestrator | 2025-02-10 09:35:10.911646 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-02-10 09:35:10.911660 | orchestrator | Monday 10 February 2025 09:33:58 +0000 (0:00:00.489) 0:00:34.687 ******* 2025-02-10 09:35:10.911672 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.911685 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.911697 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.911710 | orchestrator | 2025-02-10 09:35:10.911722 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:35:10.911740 | orchestrator | 
Monday 10 February 2025 09:33:58 +0000 (0:00:00.478) 0:00:35.165 ******* 2025-02-10 09:35:10.911753 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:35:10.911765 | orchestrator | 2025-02-10 09:35:10.911778 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-02-10 09:35:10.911791 | orchestrator | Monday 10 February 2025 09:33:59 +0000 (0:00:00.919) 0:00:36.085 ******* 2025-02-10 09:35:10.911816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.911833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.911863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': 
True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.911877 | orchestrator | 2025-02-10 09:35:10.911890 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-02-10 09:35:10.911902 | orchestrator | Monday 10 February 2025 09:34:01 +0000 (0:00:02.025) 0:00:38.110 ******* 2025-02-10 09:35:10.911915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:35:10.911935 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.911957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:35:10.911972 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.912018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:35:10.912042 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.912055 | orchestrator | 2025-02-10 09:35:10.912068 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-02-10 09:35:10.912080 | orchestrator | Monday 10 February 2025 09:34:03 +0000 (0:00:01.651) 0:00:39.762 ******* 2025-02-10 09:35:10.912102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:35:10.912123 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.912144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:35:10.912158 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.912172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:35:10.912192 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.912205 | orchestrator | 2025-02-10 09:35:10.912218 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-02-10 09:35:10.912230 | orchestrator | Monday 10 February 2025 09:34:04 +0000 (0:00:01.584) 0:00:41.346 ******* 2025-02-10 09:35:10.912251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.912265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.912294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': 
True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:35:10.912309 | orchestrator | 2025-02-10 09:35:10.912322 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:35:10.912335 | orchestrator | Monday 10 February 2025 09:34:12 +0000 (0:00:08.124) 0:00:49.471 ******* 2025-02-10 09:35:10.912347 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:35:10.912360 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:35:10.912379 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:35:10.912391 | orchestrator | 2025-02-10 09:35:10.912404 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:35:10.912417 | orchestrator | Monday 10 February 2025 09:34:13 +0000 (0:00:00.924) 0:00:50.395 ******* 2025-02-10 09:35:10.912430 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:35:10.912442 | orchestrator | 2025-02-10 09:35:10.912454 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-02-10 09:35:10.912466 | orchestrator | Monday 10 February 2025 09:34:14 +0000 (0:00:00.930) 0:00:51.326 ******* 2025-02-10 09:35:10.912479 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:35:10.912491 | orchestrator | 2025-02-10 09:35:10.912503 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-02-10 09:35:10.912515 | orchestrator | Monday 10 February 2025 09:34:17 +0000 (0:00:02.689) 0:00:54.015 ******* 2025-02-10 09:35:10.912528 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:35:10.912540 | orchestrator | 2025-02-10 09:35:10.912553 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-02-10 09:35:10.912565 | orchestrator | Monday 10 February 2025 09:34:20 +0000 (0:00:02.828) 0:00:56.844 ******* 2025-02-10 09:35:10.912578 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:35:10.912590 | orchestrator | 2025-02-10 09:35:10.912603 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-10 09:35:10.912615 | orchestrator | Monday 10 February 2025 09:34:32 +0000 (0:00:12.352) 0:01:09.197 ******* 2025-02-10 09:35:10.912627 | orchestrator | 2025-02-10 09:35:10.912640 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-10 09:35:10.912652 | orchestrator | Monday 10 February 2025 09:34:32 +0000 (0:00:00.071) 0:01:09.268 ******* 2025-02-10 09:35:10.912664 | orchestrator | 2025-02-10 09:35:10.912676 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-10 09:35:10.912689 | orchestrator | Monday 10 February 2025 09:34:32 +0000 (0:00:00.063) 0:01:09.332 ******* 2025-02-10 09:35:10.912701 | orchestrator | 2025-02-10 09:35:10.912713 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-02-10 09:35:10.912725 | orchestrator | Monday 10 February 2025 09:34:32 +0000 (0:00:00.218) 0:01:09.550 ******* 2025-02-10 09:35:10.912738 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:35:10.912750 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:35:10.912762 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:35:10.912774 | orchestrator | 2025-02-10 09:35:10.912786 | orchestrator | PLAY RECAP 
********************************************************************* 2025-02-10 09:35:10.912799 | orchestrator | testbed-node-0 : ok=41  changed=11  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-02-10 09:35:10.912811 | orchestrator | testbed-node-1 : ok=38  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-10 09:35:10.912824 | orchestrator | testbed-node-2 : ok=38  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-10 09:35:10.912837 | orchestrator | 2025-02-10 09:35:10.912849 | orchestrator | 2025-02-10 09:35:10.912862 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:35:10.912880 | orchestrator | Monday 10 February 2025 09:35:09 +0000 (0:00:36.592) 0:01:46.142 ******* 2025-02-10 09:35:10.912893 | orchestrator | =============================================================================== 2025-02-10 09:35:10.912906 | orchestrator | horizon : Restart horizon container ------------------------------------ 36.59s 2025-02-10 09:35:10.912918 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 12.35s 2025-02-10 09:35:10.912930 | orchestrator | horizon : Deploy horizon container -------------------------------------- 8.12s 2025-02-10 09:35:10.912950 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.61s 2025-02-10 09:35:13.968233 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.28s 2025-02-10 09:35:13.968368 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.12s 2025-02-10 09:35:13.968386 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.83s 2025-02-10 09:35:13.968399 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.69s 2025-02-10 09:35:13.968411 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.61s 2025-02-10 09:35:13.968424 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.03s 2025-02-10 09:35:13.968436 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.99s 2025-02-10 09:35:13.968449 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.65s 2025-02-10 09:35:13.968462 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.58s 2025-02-10 09:35:13.968474 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.17s 2025-02-10 09:35:13.968487 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.11s 2025-02-10 09:35:13.968499 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.93s 2025-02-10 09:35:13.968511 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.92s 2025-02-10 09:35:13.968523 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.92s 2025-02-10 09:35:13.968535 | orchestrator | horizon : Update policy file name --------------------------------------- 0.90s 2025-02-10 09:35:13.968548 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.70s 2025-02-10 09:35:13.968560 | orchestrator | 2025-02-10 09:35:10 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 
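The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines that follow come from the client polling the queued deployment tasks until each one reaches a terminal state. A minimal sketch of that polling pattern, for illustration only (the task IDs are taken from the log, but the get_state callback and function names below are hypothetical and not the actual osism client API):

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_state, interval=1):
        # Poll every `interval` seconds until every task reports a terminal state,
        # logging the current state of each pending task on every pass.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    if __name__ == "__main__":
        # Stand-in state source; a real client would query the task backend instead.
        wait_for_tasks(["11fb0442-3903-4255-9c4a-c97b2df884f6",
                        "64b2ff07-7227-444b-8be2-1ea556056a3c"],
                       lambda task_id: "SUCCESS")

This mirrors what the log shows: the horizon task (11fb0442...) eventually reports SUCCESS while the remaining tasks keep being re-checked once per second.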
2025-02-10 09:35:13.968575 | orchestrator | 2025-02-10 09:35:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:13.968618 | orchestrator | 2025-02-10 09:35:13 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:17.006467 | orchestrator | 2025-02-10 09:35:13 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:35:17.006607 | orchestrator | 2025-02-10 09:35:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:17.006643 | orchestrator | 2025-02-10 09:35:17 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:20.055338 | orchestrator | 2025-02-10 09:35:17 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:35:20.055486 | orchestrator | 2025-02-10 09:35:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:20.055530 | orchestrator | 2025-02-10 09:35:20 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:20.057067 | orchestrator | 2025-02-10 09:35:20 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state STARTED 2025-02-10 09:35:23.109572 | orchestrator | 2025-02-10 09:35:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:23.109712 | orchestrator | 2025-02-10 09:35:23 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:23.110089 | orchestrator | 2025-02-10 09:35:23 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:23.110109 | orchestrator | 2025-02-10 09:35:23 | INFO  | Task 11fb0442-3903-4255-9c4a-c97b2df884f6 is in state SUCCESS 2025-02-10 09:35:23.112150 | orchestrator | 2025-02-10 09:35:23.112191 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:35:23.112201 | orchestrator | 2025-02-10 09:35:23.112207 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-02-10 09:35:23.112212 | orchestrator | 2025-02-10 09:35:23.112416 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-10 09:35:23.112431 | orchestrator | Monday 10 February 2025 09:33:04 +0000 (0:00:01.176) 0:00:01.176 ******* 2025-02-10 09:35:23.112437 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:35:23.112457 | orchestrator | 2025-02-10 09:35:23.112466 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-10 09:35:23.112474 | orchestrator | Monday 10 February 2025 09:33:05 +0000 (0:00:00.632) 0:00:01.809 ******* 2025-02-10 09:35:23.112484 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-02-10 09:35:23.112493 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-02-10 09:35:23.112502 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-02-10 09:35:23.112510 | orchestrator | 2025-02-10 09:35:23.112518 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-10 09:35:23.112526 | orchestrator | Monday 10 February 2025 09:33:06 +0000 (0:00:01.027) 0:00:02.836 ******* 2025-02-10 09:35:23.112534 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:35:23.112542 | orchestrator | 2025-02-10 09:35:23.112551 | 
orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-10 09:35:23.112557 | orchestrator | Monday 10 February 2025 09:33:07 +0000 (0:00:00.922) 0:00:03.759 ******* 2025-02-10 09:35:23.112561 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.112567 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.112572 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.112577 | orchestrator | 2025-02-10 09:35:23.112585 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-10 09:35:23.112590 | orchestrator | Monday 10 February 2025 09:33:08 +0000 (0:00:00.710) 0:00:04.469 ******* 2025-02-10 09:35:23.112595 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.112600 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.112604 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.112609 | orchestrator | 2025-02-10 09:35:23.112614 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-10 09:35:23.112619 | orchestrator | Monday 10 February 2025 09:33:08 +0000 (0:00:00.360) 0:00:04.830 ******* 2025-02-10 09:35:23.112623 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.112628 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.112633 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.112637 | orchestrator | 2025-02-10 09:35:23.112642 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-10 09:35:23.112647 | orchestrator | Monday 10 February 2025 09:33:09 +0000 (0:00:01.082) 0:00:05.912 ******* 2025-02-10 09:35:23.112652 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.112657 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.112661 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.112666 | orchestrator | 2025-02-10 09:35:23.112671 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-10 09:35:23.112676 | orchestrator | Monday 10 February 2025 09:33:09 +0000 (0:00:00.344) 0:00:06.257 ******* 2025-02-10 09:35:23.112680 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.112685 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.112690 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.112694 | orchestrator | 2025-02-10 09:35:23.112699 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-10 09:35:23.112705 | orchestrator | Monday 10 February 2025 09:33:10 +0000 (0:00:00.341) 0:00:06.598 ******* 2025-02-10 09:35:23.112710 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.112714 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.112719 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.112723 | orchestrator | 2025-02-10 09:35:23.112728 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-10 09:35:23.112733 | orchestrator | Monday 10 February 2025 09:33:10 +0000 (0:00:00.402) 0:00:07.000 ******* 2025-02-10 09:35:23.112747 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.112752 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.112757 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.112777 | orchestrator | 2025-02-10 09:35:23.112781 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-10 09:35:23.112786 | orchestrator | Monday 10 February 2025 
09:33:11 +0000 (0:00:00.637) 0:00:07.637 ******* 2025-02-10 09:35:23.112791 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.112796 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.112800 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.112805 | orchestrator | 2025-02-10 09:35:23.112810 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-10 09:35:23.112814 | orchestrator | Monday 10 February 2025 09:33:11 +0000 (0:00:00.364) 0:00:08.002 ******* 2025-02-10 09:35:23.112819 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:35:23.112824 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:35:23.112828 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:35:23.112833 | orchestrator | 2025-02-10 09:35:23.112838 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-10 09:35:23.112842 | orchestrator | Monday 10 February 2025 09:33:12 +0000 (0:00:00.769) 0:00:08.771 ******* 2025-02-10 09:35:23.112847 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.112852 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.112857 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.112861 | orchestrator | 2025-02-10 09:35:23.112866 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-10 09:35:23.112871 | orchestrator | Monday 10 February 2025 09:33:12 +0000 (0:00:00.530) 0:00:09.301 ******* 2025-02-10 09:35:23.112882 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:35:23.112887 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:35:23.112892 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:35:23.112897 | orchestrator | 2025-02-10 09:35:23.112901 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-10 09:35:23.112906 | orchestrator | Monday 10 February 2025 09:33:15 +0000 (0:00:02.512) 0:00:11.814 ******* 2025-02-10 09:35:23.112911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:35:23.112916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:35:23.112923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:35:23.112928 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.112933 | orchestrator | 2025-02-10 09:35:23.112938 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-10 09:35:23.112942 | orchestrator | Monday 10 February 2025 09:33:15 +0000 (0:00:00.527) 0:00:12.342 ******* 2025-02-10 09:35:23.112950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-10 09:35:23.112957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-10 
09:35:23.112962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-10 09:35:23.112967 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.112976 | orchestrator | 2025-02-10 09:35:23.112981 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-10 09:35:23.113004 | orchestrator | Monday 10 February 2025 09:33:16 +0000 (0:00:00.824) 0:00:13.166 ******* 2025-02-10 09:35:23.113011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:35:23.113021 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:35:23.113026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:35:23.113030 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113035 | orchestrator | 2025-02-10 09:35:23.113041 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-10 09:35:23.113047 | orchestrator | Monday 10 February 2025 09:33:16 +0000 (0:00:00.179) 0:00:13.346 ******* 2025-02-10 09:35:23.113054 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'ff592b750b12', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-10 09:33:13.983009', 'end': '2025-02-10 09:33:14.024022', 'delta': '0:00:00.041013', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ff592b750b12'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-02-10 09:35:23.113071 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '547f5fa1e985', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-10 09:33:14.602739', 'end': '2025-02-10 09:33:14.640201', 'delta': '0:00:00.037462', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': 
False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['547f5fa1e985'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-02-10 09:35:23.113078 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'd5aa104136e6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-10 09:33:15.216502', 'end': '2025-02-10 09:33:15.256268', 'delta': '0:00:00.039766', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d5aa104136e6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-02-10 09:35:23.113088 | orchestrator | 2025-02-10 09:35:23.113093 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-10 09:35:23.113099 | orchestrator | Monday 10 February 2025 09:33:17 +0000 (0:00:00.253) 0:00:13.600 ******* 2025-02-10 09:35:23.113104 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.113110 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.113115 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.113121 | orchestrator | 2025-02-10 09:35:23.113126 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-10 09:35:23.113131 | orchestrator | Monday 10 February 2025 09:33:17 +0000 (0:00:00.707) 0:00:14.307 ******* 2025-02-10 09:35:23.113137 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-02-10 09:35:23.113143 | orchestrator | 2025-02-10 09:35:23.113148 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-10 09:35:23.113154 | orchestrator | Monday 10 February 2025 09:33:19 +0000 (0:00:01.600) 0:00:15.908 ******* 2025-02-10 09:35:23.113159 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113165 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113170 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113175 | orchestrator | 2025-02-10 09:35:23.113181 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-10 09:35:23.113186 | orchestrator | Monday 10 February 2025 09:33:20 +0000 (0:00:00.586) 0:00:16.494 ******* 2025-02-10 09:35:23.113191 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113197 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113202 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113208 | orchestrator | 2025-02-10 09:35:23.113213 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:35:23.113218 | orchestrator | Monday 10 February 2025 09:33:20 +0000 (0:00:00.572) 0:00:17.067 ******* 2025-02-10 09:35:23.113224 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113229 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113235 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113240 | orchestrator | 2025-02-10 09:35:23.113245 | orchestrator | TASK [ceph-facts : set_fact 
fsid from current_fsid] **************************** 2025-02-10 09:35:23.113251 | orchestrator | Monday 10 February 2025 09:33:21 +0000 (0:00:00.327) 0:00:17.394 ******* 2025-02-10 09:35:23.113256 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.113262 | orchestrator | 2025-02-10 09:35:23.113268 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-10 09:35:23.113277 | orchestrator | Monday 10 February 2025 09:33:21 +0000 (0:00:00.129) 0:00:17.524 ******* 2025-02-10 09:35:23.113285 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113293 | orchestrator | 2025-02-10 09:35:23.113300 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:35:23.113308 | orchestrator | Monday 10 February 2025 09:33:21 +0000 (0:00:00.256) 0:00:17.780 ******* 2025-02-10 09:35:23.113316 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113324 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113333 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113340 | orchestrator | 2025-02-10 09:35:23.113350 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-10 09:35:23.113356 | orchestrator | Monday 10 February 2025 09:33:22 +0000 (0:00:00.597) 0:00:18.377 ******* 2025-02-10 09:35:23.113364 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113372 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113380 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113388 | orchestrator | 2025-02-10 09:35:23.113396 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-10 09:35:23.113404 | orchestrator | Monday 10 February 2025 09:33:22 +0000 (0:00:00.487) 0:00:18.865 ******* 2025-02-10 09:35:23.113412 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113420 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113434 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113442 | orchestrator | 2025-02-10 09:35:23.113450 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-10 09:35:23.113457 | orchestrator | Monday 10 February 2025 09:33:22 +0000 (0:00:00.461) 0:00:19.326 ******* 2025-02-10 09:35:23.113465 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113472 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113485 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113494 | orchestrator | 2025-02-10 09:35:23.113502 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-10 09:35:23.113510 | orchestrator | Monday 10 February 2025 09:33:23 +0000 (0:00:00.451) 0:00:19.777 ******* 2025-02-10 09:35:23.113517 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113526 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113532 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113539 | orchestrator | 2025-02-10 09:35:23.113547 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-10 09:35:23.113555 | orchestrator | Monday 10 February 2025 09:33:24 +0000 (0:00:00.590) 0:00:20.368 ******* 2025-02-10 09:35:23.113563 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113571 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113579 | 
orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113587 | orchestrator | 2025-02-10 09:35:23.113595 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-10 09:35:23.113604 | orchestrator | Monday 10 February 2025 09:33:24 +0000 (0:00:00.384) 0:00:20.752 ******* 2025-02-10 09:35:23.113611 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113619 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.113627 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.113635 | orchestrator | 2025-02-10 09:35:23.113641 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-10 09:35:23.113649 | orchestrator | Monday 10 February 2025 09:33:24 +0000 (0:00:00.457) 0:00:21.210 ******* 2025-02-10 09:35:23.113656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--70e6c2b1--f69e--5685--9251--bc72a13d87ec-osd--block--70e6c2b1--f69e--5685--9251--bc72a13d87ec', 'dm-uuid-LVM-tRyDiHQo3Yjn1VzNOw3ugs1Wn82jeRSKRwC4KYrSgG2GnhExKIfb2XxWSKPReU0O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3b4a615--299b--50bf--af8e--26b6dc38e729-osd--block--f3b4a615--299b--50bf--af8e--26b6dc38e729', 'dm-uuid-LVM-JHn31bh3nY2HLNSzW3dR8R9cUD0IgsMKo81TxTwb7lrqOvuyQDoSfAU0EqLYI9pE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113692 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5101bad7--da03--58be--8044--cbe4500fcec9-osd--block--5101bad7--da03--58be--8044--cbe4500fcec9', 'dm-uuid-LVM-kOxijTWfUjiF5H2iNDT8sQh68XR7izWhfpOIMTd85vAZEHMD75gm4lXR3FKeWE8Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d59ecc87--3940--56cd--881a--fbc914ec02de-osd--block--d59ecc87--3940--56cd--881a--fbc914ec02de', 'dm-uuid-LVM-zmO3sPN2RjX9IesfI6WIJfNq2jkKzj9sOULR9HRtVKlTmR7lZamJbFkYqJEMAZJZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113752 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--89c58721--f175--5d0e--8750--3436c1d71ced-osd--block--89c58721--f175--5d0e--8750--3436c1d71ced', 'dm-uuid-LVM-JnoSO34eYdGnWTsCakmewB44wX9WXEtdmovb6sRP6nxMebcoeelGfZ966qJm6W0U'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part1', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part14', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part15', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part16', 'scsi-SQEMU_QEMU_HARDDISK_8bdd934e-bb0e-4b1c-85b6-0f94f416d8b7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.113784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--989340a3--ac62--57b3--a342--92d58018bc1c-osd--block--989340a3--ac62--57b3--a342--92d58018bc1c', 'dm-uuid-LVM-kpj38q7QthTyMHyxijih2mNuaS0gaM14qrVuDa0sBq3IdYilV1K0Dtyrg5332VUh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--70e6c2b1--f69e--5685--9251--bc72a13d87ec-osd--block--70e6c2b1--f69e--5685--9251--bc72a13d87ec'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GRE1ta-6boF-QPTi-Jfmc-f78s-tRL3-IBBacy', 'scsi-0QEMU_QEMU_HARDDISK_094c1351-6c25-40a9-b10a-7f3d6a96f205', 'scsi-SQEMU_QEMU_HARDDISK_094c1351-6c25-40a9-b10a-7f3d6a96f205'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.113824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f3b4a615--299b--50bf--af8e--26b6dc38e729-osd--block--f3b4a615--299b--50bf--af8e--26b6dc38e729'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-H3QuVZ-RPGB-y4GH-7c1v-8Blc-aucD-jXCBCR', 'scsi-0QEMU_QEMU_HARDDISK_494ee814-0dd9-4f0f-8082-b266e2c53997', 'scsi-SQEMU_QEMU_HARDDISK_494ee814-0dd9-4f0f-8082-b266e2c53997'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.113849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_086c202d-0ccf-4be9-aa6b-e4e971478b82', 'scsi-SQEMU_QEMU_HARDDISK_086c202d-0ccf-4be9-aa6b-e4e971478b82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.113873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': 
'506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.113921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113930 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.113938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.113997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4aefdc38-3054-474e-a34a-07d97ce8643d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.114048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5101bad7--da03--58be--8044--cbe4500fcec9-osd--block--5101bad7--da03--58be--8044--cbe4500fcec9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AfV1Ut-imLT-GJEx-qQZs-mDvO-OD8D-loiCNw', 'scsi-0QEMU_QEMU_HARDDISK_103f3392-831d-4ee6-b0f0-d6be015816d3', 'scsi-SQEMU_QEMU_HARDDISK_103f3392-831d-4ee6-b0f0-d6be015816d3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:35:23.114074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d59ecc87--3940--56cd--881a--fbc914ec02de-osd--block--d59ecc87--3940--56cd--881a--fbc914ec02de'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RICemi-Zbh1-pks0-7V8C-6Pf8-EuYC-pPo3u4', 'scsi-0QEMU_QEMU_HARDDISK_23794fae-2c08-458a-becf-a15050b8218b', 'scsi-SQEMU_QEMU_HARDDISK_23794fae-2c08-458a-becf-a15050b8218b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_7bb1a57e-a3aa-41a1-8378-2ba5a5124dde-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_492baa9f-f661-44dd-a3d2-70d79942748c', 'scsi-SQEMU_QEMU_HARDDISK_492baa9f-f661-44dd-a3d2-70d79942748c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--89c58721--f175--5d0e--8750--3436c1d71ced-osd--block--89c58721--f175--5d0e--8750--3436c1d71ced'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u2fbYc-N0Pr-avQV-caWe-H1nP-U6AI-72y8iY', 'scsi-0QEMU_QEMU_HARDDISK_a31d8f91-c02a-4f65-9bd6-abd5e53b34f2', 'scsi-SQEMU_QEMU_HARDDISK_a31d8f91-c02a-4f65-9bd6-abd5e53b34f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114136 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.114144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--989340a3--ac62--57b3--a342--92d58018bc1c-osd--block--989340a3--ac62--57b3--a342--92d58018bc1c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2lFnkL-eTjr-jX59-b9Ca-Rsp5-OoNK-4ot2XJ', 'scsi-0QEMU_QEMU_HARDDISK_be832b54-23bf-4f17-8551-69f0e04b6625', 'scsi-SQEMU_QEMU_HARDDISK_be832b54-23bf-4f17-8551-69f0e04b6625'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_809e68db-7594-4e4e-90c0-4a7ae6eb5d4d', 'scsi-SQEMU_QEMU_HARDDISK_809e68db-7594-4e4e-90c0-4a7ae6eb5d4d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:35:23.114174 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.114182 | orchestrator | 2025-02-10 09:35:23.114190 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-10 09:35:23.114198 | orchestrator | Monday 10 February 2025 09:33:25 +0000 (0:00:00.889) 0:00:22.099 ******* 2025-02-10 09:35:23.114205 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-02-10 09:35:23.114213 | orchestrator | 2025-02-10 09:35:23.114221 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-10 09:35:23.114229 | orchestrator | Monday 10 February 2025 09:33:27 +0000 (0:00:01.706) 0:00:23.806 ******* 2025-02-10 09:35:23.114237 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.114242 | orchestrator | 2025-02-10 09:35:23.114253 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-02-10 09:35:23.114258 | orchestrator | Monday 10 February 2025 09:33:27 +0000 (0:00:00.162) 0:00:23.968 ******* 2025-02-10 09:35:23.114263 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.114268 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.114273 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.114277 | orchestrator | 2025-02-10 09:35:23.114282 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-10 09:35:23.114287 | orchestrator | Monday 10 February 2025 09:33:28 +0000 (0:00:00.424) 0:00:24.392 ******* 2025-02-10 09:35:23.114292 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.114301 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.114307 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.114312 | orchestrator | 2025-02-10 09:35:23.114316 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-10 09:35:23.114321 | orchestrator | Monday 10 February 2025 09:33:28 +0000 (0:00:00.717) 0:00:25.110 ******* 2025-02-10 09:35:23.114326 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.114331 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.114335 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.114340 | orchestrator | 2025-02-10 09:35:23.114345 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:35:23.114350 | orchestrator | 
Monday 10 February 2025 09:33:29 +0000 (0:00:00.330) 0:00:25.441 ******* 2025-02-10 09:35:23.114354 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.114359 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.114364 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.114368 | orchestrator | 2025-02-10 09:35:23.114373 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:35:23.114378 | orchestrator | Monday 10 February 2025 09:33:30 +0000 (0:00:00.975) 0:00:26.417 ******* 2025-02-10 09:35:23.114383 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.114388 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.114392 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.114397 | orchestrator | 2025-02-10 09:35:23.114402 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:35:23.114406 | orchestrator | Monday 10 February 2025 09:33:30 +0000 (0:00:00.342) 0:00:26.759 ******* 2025-02-10 09:35:23.114411 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.114416 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.114423 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.114430 | orchestrator | 2025-02-10 09:35:23.114438 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:35:23.114446 | orchestrator | Monday 10 February 2025 09:33:30 +0000 (0:00:00.558) 0:00:27.318 ******* 2025-02-10 09:35:23.114454 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.114462 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.114473 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.114481 | orchestrator | 2025-02-10 09:35:23.114488 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-10 09:35:23.114496 | orchestrator | Monday 10 February 2025 09:33:31 +0000 (0:00:00.637) 0:00:27.955 ******* 2025-02-10 09:35:23.114504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:35:23.114512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:35:23.114520 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:35:23.114528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:35:23.114536 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.114543 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:35:23.114552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:35:23.114560 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.114569 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:35:23.114581 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:35:23.114589 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:35:23.114597 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.114605 | orchestrator | 2025-02-10 09:35:23.114613 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-10 09:35:23.114626 | orchestrator | Monday 10 February 2025 09:33:32 +0000 (0:00:01.331) 0:00:29.287 ******* 2025-02-10 09:35:23.114635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:35:23.114643 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:35:23.114651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:35:23.114660 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:35:23.114668 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:35:23.114675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:35:23.114682 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.114690 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:35:23.114698 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:35:23.114706 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.114713 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:35:23.114720 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.114729 | orchestrator | 2025-02-10 09:35:23.114737 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-10 09:35:23.114745 | orchestrator | Monday 10 February 2025 09:33:33 +0000 (0:00:00.796) 0:00:30.084 ******* 2025-02-10 09:35:23.114753 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-02-10 09:35:23.114761 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-02-10 09:35:23.114768 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-02-10 09:35:23.114773 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-02-10 09:35:23.114778 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-02-10 09:35:23.114783 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-02-10 09:35:23.114788 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-02-10 09:35:23.114793 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-02-10 09:35:23.114797 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-02-10 09:35:23.114802 | orchestrator | 2025-02-10 09:35:23.114807 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-10 09:35:23.114812 | orchestrator | Monday 10 February 2025 09:33:35 +0000 (0:00:02.050) 0:00:32.134 ******* 2025-02-10 09:35:23.114816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:35:23.114821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:35:23.114826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:35:23.114831 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.114836 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:35:23.114841 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:35:23.114845 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:35:23.114850 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.114855 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:35:23.114859 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:35:23.114864 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:35:23.114869 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.114874 | orchestrator | 2025-02-10 09:35:23.114878 | orchestrator | TASK [ceph-facts : set_fact 
_monitor_addresses to monitor_interface - ipv6] **** 2025-02-10 09:35:23.114883 | orchestrator | Monday 10 February 2025 09:33:36 +0000 (0:00:00.718) 0:00:32.853 ******* 2025-02-10 09:35:23.114893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:35:23.114898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:35:23.114902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:35:23.114907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:35:23.114912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:35:23.114916 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.114921 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:35:23.114926 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.114931 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:35:23.114941 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:35:23.114946 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:35:23.114951 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.114955 | orchestrator | 2025-02-10 09:35:23.114960 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-10 09:35:23.114965 | orchestrator | Monday 10 February 2025 09:33:36 +0000 (0:00:00.453) 0:00:33.306 ******* 2025-02-10 09:35:23.114969 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:35:23.114975 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:35:23.114980 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:35:23.115022 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115028 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:35:23.115034 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:35:23.115039 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:35:23.115044 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115049 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:35:23.115057 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:35:23.115062 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:35:23.115067 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115072 | orchestrator | 2025-02-10 09:35:23.115077 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-10 09:35:23.115085 | orchestrator | Monday 10 February 2025 09:33:37 +0000 (0:00:00.417) 0:00:33.724 ******* 2025-02-10 09:35:23.115090 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:35:23.115095 | orchestrator | 2025-02-10 09:35:23.115100 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:35:23.115105 | orchestrator | Monday 10 February 2025 09:33:38 +0000 (0:00:00.811) 0:00:34.536 ******* 2025-02-10 09:35:23.115110 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115115 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115119 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115124 | orchestrator | 2025-02-10 09:35:23.115129 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:35:23.115134 | orchestrator | Monday 10 February 2025 09:33:38 +0000 (0:00:00.333) 0:00:34.870 ******* 2025-02-10 09:35:23.115138 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115143 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115148 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115153 | orchestrator | 2025-02-10 09:35:23.115158 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:35:23.115166 | orchestrator | Monday 10 February 2025 09:33:38 +0000 (0:00:00.350) 0:00:35.220 ******* 2025-02-10 09:35:23.115171 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115176 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115180 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115185 | orchestrator | 2025-02-10 09:35:23.115190 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:35:23.115195 | orchestrator | Monday 10 February 2025 09:33:39 +0000 (0:00:00.378) 0:00:35.599 ******* 2025-02-10 09:35:23.115199 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.115204 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.115209 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.115213 | orchestrator | 2025-02-10 09:35:23.115218 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:35:23.115223 | orchestrator | Monday 10 February 2025 09:33:39 +0000 (0:00:00.704) 0:00:36.304 ******* 2025-02-10 09:35:23.115227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:35:23.115232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:35:23.115237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:35:23.115242 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115246 | orchestrator | 2025-02-10 09:35:23.115251 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:35:23.115256 | orchestrator | Monday 10 February 2025 09:33:40 +0000 (0:00:00.489) 0:00:36.794 ******* 2025-02-10 09:35:23.115261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:35:23.115265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:35:23.115270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:35:23.115275 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115280 | orchestrator | 2025-02-10 09:35:23.115284 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:35:23.115289 | orchestrator | Monday 10 February 2025 09:33:40 +0000 (0:00:00.431) 0:00:37.226 ******* 2025-02-10 09:35:23.115294 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2025-02-10 09:35:23.115299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:35:23.115303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:35:23.115308 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115313 | orchestrator | 2025-02-10 09:35:23.115317 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:35:23.115322 | orchestrator | Monday 10 February 2025 09:33:41 +0000 (0:00:00.441) 0:00:37.667 ******* 2025-02-10 09:35:23.115327 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:35:23.115332 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:35:23.115336 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:35:23.115344 | orchestrator | 2025-02-10 09:35:23.115349 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:35:23.115354 | orchestrator | Monday 10 February 2025 09:33:41 +0000 (0:00:00.405) 0:00:38.073 ******* 2025-02-10 09:35:23.115359 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-10 09:35:23.115363 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-10 09:35:23.115368 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-10 09:35:23.115373 | orchestrator | 2025-02-10 09:35:23.115378 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:35:23.115383 | orchestrator | Monday 10 February 2025 09:33:42 +0000 (0:00:01.281) 0:00:39.355 ******* 2025-02-10 09:35:23.115387 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115392 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115397 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115402 | orchestrator | 2025-02-10 09:35:23.115406 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:35:23.115414 | orchestrator | Monday 10 February 2025 09:33:43 +0000 (0:00:00.489) 0:00:39.844 ******* 2025-02-10 09:35:23.115419 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115424 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115429 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115433 | orchestrator | 2025-02-10 09:35:23.115438 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:35:23.115445 | orchestrator | Monday 10 February 2025 09:33:43 +0000 (0:00:00.368) 0:00:40.213 ******* 2025-02-10 09:35:23.115450 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:35:23.115455 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115460 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:35:23.115464 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115472 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:35:23.115477 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115482 | orchestrator | 2025-02-10 09:35:23.115487 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:35:23.115492 | orchestrator | Monday 10 February 2025 09:33:44 +0000 (0:00:00.538) 0:00:40.752 ******* 2025-02-10 09:35:23.115497 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:35:23.115502 | orchestrator | skipping: 
[testbed-node-3] 2025-02-10 09:35:23.115507 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:35:23.115512 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115517 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:35:23.115522 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115527 | orchestrator | 2025-02-10 09:35:23.115531 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:35:23.115536 | orchestrator | Monday 10 February 2025 09:33:45 +0000 (0:00:00.727) 0:00:41.480 ******* 2025-02-10 09:35:23.115541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:35:23.115546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:35:23.115550 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:35:23.115555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:35:23.115560 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:35:23.115569 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:35:23.115574 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:35:23.115578 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115583 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:35:23.115588 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:35:23.115593 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115597 | orchestrator | 2025-02-10 09:35:23.115602 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-10 09:35:23.115607 | orchestrator | Monday 10 February 2025 09:33:46 +0000 (0:00:00.988) 0:00:42.468 ******* 2025-02-10 09:35:23.115612 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115617 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115626 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:35:23.115635 | orchestrator | 2025-02-10 09:35:23.115644 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-10 09:35:23.115653 | orchestrator | Monday 10 February 2025 09:33:46 +0000 (0:00:00.409) 0:00:42.878 ******* 2025-02-10 09:35:23.115666 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:35:23.115680 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:35:23.115689 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:35:23.115697 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-10 09:35:23.115706 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:35:23.115715 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:35:23.115724 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:35:23.115733 | orchestrator | 
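The ceph_run_cmd fact set above (and ceph_admin_command in the next task) wraps the ceph CLI so later tasks can invoke it either natively or through the monitor container. A rough sketch of the shape these facts take for a containerized deployment like this one, assuming upstream ceph-ansible conventions; the image reference and mount list are assumptions and are not taken from this log:

    # sketch only - containerized case; bare-metal deployments collapse to the plain "ceph" binary
    ceph_run_cmd: >-
      docker run --rm --net=host
      -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph
      --entrypoint=ceph <ceph-container-image>          # placeholder image reference
    ceph_admin_command: "{{ ceph_run_cmd }} -n client.admin -k /etc/ceph/ceph.client.admin.keyring"

The per-item delegation seen in the log (testbed-node-0 through testbed-manager) simply sets these facts on every host in the play so that any of them can be used as a delegate for admin commands.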
2025-02-10 09:35:23.115742 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-10 09:35:23.115750 | orchestrator | Monday 10 February 2025 09:33:47 +0000 (0:00:01.207) 0:00:44.085 ******* 2025-02-10 09:35:23.115759 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:35:23.115768 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:35:23.115776 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:35:23.115784 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-10 09:35:23.115793 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:35:23.115801 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:35:23.115810 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:35:23.115818 | orchestrator | 2025-02-10 09:35:23.115827 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-02-10 09:35:23.115836 | orchestrator | Monday 10 February 2025 09:33:50 +0000 (0:00:02.313) 0:00:46.398 ******* 2025-02-10 09:35:23.115845 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:35:23.115854 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:35:23.115863 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-02-10 09:35:23.115873 | orchestrator | 2025-02-10 09:35:23.115881 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-02-10 09:35:23.115894 | orchestrator | Monday 10 February 2025 09:33:50 +0000 (0:00:00.702) 0:00:47.101 ******* 2025-02-10 09:35:23.115904 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:35:23.115912 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:35:23.115917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:35:23.115922 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:35:23.115927 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 
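The pool items logged in the "create openstack pool(s)" task map to ceph-ansible's OpenStack pool definitions. A minimal sketch of the corresponding inventory variables, assuming the upstream ceph-ansible variable names (openstack_config, openstack_pools) rather than any OSISM-specific overrides; the field values are copied from the items shown above:

    openstack_config: true
    openstack_pools:
      - name: volumes                 # backups, images, metrics and vms follow the same pattern
        application: rbd
        pg_num: 32
        pgp_num: 32
        pg_autoscale_mode: false
        rule_name: replicated_rule
        size: 3                       # replica count
        min_size: 0                   # 0 = use the cluster default
        type: 1                       # 1 = replicated pool
        erasure_profile: ""
        expected_num_objects: ""

Each entry is created against the first monitor (testbed-node-0), which is why this single task accounts for roughly 40 seconds in the TASKS RECAP further down.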
2025-02-10 09:35:23.115940 | orchestrator | 2025-02-10 09:35:23.115945 | orchestrator | TASK [generate keys] *********************************************************** 2025-02-10 09:35:23.115950 | orchestrator | Monday 10 February 2025 09:34:31 +0000 (0:00:40.289) 0:01:27.391 ******* 2025-02-10 09:35:23.115954 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.115959 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.115964 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.115968 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.115973 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.115978 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.115983 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-02-10 09:35:23.116011 | orchestrator | 2025-02-10 09:35:23.116016 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-02-10 09:35:23.116020 | orchestrator | Monday 10 February 2025 09:34:52 +0000 (0:00:20.978) 0:01:48.369 ******* 2025-02-10 09:35:23.116025 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116030 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116035 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116043 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116048 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116053 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116058 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-10 09:35:23.116062 | orchestrator | 2025-02-10 09:35:23.116067 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-02-10 09:35:23.116072 | orchestrator | Monday 10 February 2025 09:35:02 +0000 (0:00:10.609) 0:01:58.979 ******* 2025-02-10 09:35:23.116077 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116081 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-10 09:35:23.116086 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-10 09:35:23.116091 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116095 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-10 09:35:23.116100 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-10 09:35:23.116105 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116110 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-10 09:35:23.116114 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-10 09:35:23.116119 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:23.116124 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-10 09:35:23.116132 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-10 09:35:26.155511 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:26.155614 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-10 09:35:26.155622 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-10 09:35:26.155628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:35:26.155661 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-10 09:35:26.155667 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-10 09:35:26.155674 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-02-10 09:35:26.155680 | orchestrator | 2025-02-10 09:35:26.155686 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:35:26.155694 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-02-10 09:35:26.155701 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-02-10 09:35:26.155707 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-02-10 09:35:26.155713 | orchestrator | 2025-02-10 09:35:26.155718 | orchestrator | 2025-02-10 09:35:26.155724 | orchestrator | 2025-02-10 09:35:26.155729 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:35:26.155735 | orchestrator | Monday 10 February 2025 09:35:21 +0000 (0:00:18.818) 0:02:17.798 ******* 2025-02-10 09:35:26.155740 | orchestrator | =============================================================================== 2025-02-10 09:35:26.155746 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.29s 2025-02-10 09:35:26.155751 | orchestrator | generate keys ---------------------------------------------------------- 20.98s 2025-02-10 09:35:26.155757 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.82s 2025-02-10 09:35:26.155762 | orchestrator | get keys from monitors ------------------------------------------------- 10.61s 2025-02-10 09:35:26.155768 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.51s 2025-02-10 09:35:26.155787 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.31s 2025-02-10 09:35:26.155793 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.05s 2025-02-10 09:35:26.155798 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.71s 2025-02-10 09:35:26.155804 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.60s 2025-02-10 09:35:26.155809 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.33s 2025-02-10 09:35:26.155816 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 1.28s 
2025-02-10 09:35:26.155821 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.21s 2025-02-10 09:35:26.155827 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 1.08s 2025-02-10 09:35:26.155832 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 1.03s 2025-02-10 09:35:26.155838 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.99s 2025-02-10 09:35:26.155843 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.98s 2025-02-10 09:35:26.155849 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.92s 2025-02-10 09:35:26.155854 | orchestrator | ceph-facts : set_fact devices generate device list when osd_auto_discovery --- 0.89s 2025-02-10 09:35:26.155860 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.82s 2025-02-10 09:35:26.155865 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.81s 2025-02-10 09:35:26.155871 | orchestrator | 2025-02-10 09:35:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:26.155889 | orchestrator | 2025-02-10 09:35:26 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:26.156177 | orchestrator | 2025-02-10 09:35:26 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:29.206773 | orchestrator | 2025-02-10 09:35:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:29.207032 | orchestrator | 2025-02-10 09:35:29 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:29.210234 | orchestrator | 2025-02-10 09:35:29 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:32.251842 | orchestrator | 2025-02-10 09:35:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:32.252082 | orchestrator | 2025-02-10 09:35:32 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:35.298410 | orchestrator | 2025-02-10 09:35:32 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:35.298572 | orchestrator | 2025-02-10 09:35:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:35.298613 | orchestrator | 2025-02-10 09:35:35 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:35.298926 | orchestrator | 2025-02-10 09:35:35 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:35.300573 | orchestrator | 2025-02-10 09:35:35 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:38.351198 | orchestrator | 2025-02-10 09:35:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:38.351346 | orchestrator | 2025-02-10 09:35:38 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:38.352419 | orchestrator | 2025-02-10 09:35:38 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:38.352459 | orchestrator | 2025-02-10 09:35:38 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:41.406343 | orchestrator | 2025-02-10 09:35:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:41.406501 | orchestrator | 2025-02-10 09:35:41 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state 
STARTED 2025-02-10 09:35:41.407462 | orchestrator | 2025-02-10 09:35:41 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:41.408814 | orchestrator | 2025-02-10 09:35:41 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:41.408959 | orchestrator | 2025-02-10 09:35:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:44.454374 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:44.454733 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:44.457032 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:47.498887 | orchestrator | 2025-02-10 09:35:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:47.499110 | orchestrator | 2025-02-10 09:35:47 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:47.499647 | orchestrator | 2025-02-10 09:35:47 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:47.501117 | orchestrator | 2025-02-10 09:35:47 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:50.539723 | orchestrator | 2025-02-10 09:35:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:50.539879 | orchestrator | 2025-02-10 09:35:50 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:50.540609 | orchestrator | 2025-02-10 09:35:50 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:50.542397 | orchestrator | 2025-02-10 09:35:50 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:50.542899 | orchestrator | 2025-02-10 09:35:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:53.608327 | orchestrator | 2025-02-10 09:35:53 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:53.608462 | orchestrator | 2025-02-10 09:35:53 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:53.608481 | orchestrator | 2025-02-10 09:35:53 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:53.608503 | orchestrator | 2025-02-10 09:35:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:56.656587 | orchestrator | 2025-02-10 09:35:56 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:56.657471 | orchestrator | 2025-02-10 09:35:56 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:56.660498 | orchestrator | 2025-02-10 09:35:56 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:35:59.707546 | orchestrator | 2025-02-10 09:35:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:59.707722 | orchestrator | 2025-02-10 09:35:59 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:35:59.708022 | orchestrator | 2025-02-10 09:35:59 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:35:59.709073 | orchestrator | 2025-02-10 09:35:59 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:36:02.745520 | orchestrator | 2025-02-10 09:35:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:02.745650 | orchestrator 
| 2025-02-10 09:36:02 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:36:02.746832 | orchestrator | 2025-02-10 09:36:02 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state STARTED 2025-02-10 09:36:02.747781 | orchestrator | 2025-02-10 09:36:02 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:36:05.799928 | orchestrator | 2025-02-10 09:36:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:05.800425 | orchestrator | 2025-02-10 09:36:05 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state STARTED 2025-02-10 09:36:05.802088 | orchestrator | 2025-02-10 09:36:05.802190 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:36:05.802208 | orchestrator | 2025-02-10 09:36:05.802223 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-02-10 09:36:05.802238 | orchestrator | 2025-02-10 09:36:05.802253 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-10 09:36:05.802294 | orchestrator | Monday 10 February 2025 09:35:35 +0000 (0:00:00.560) 0:00:00.560 ******* 2025-02-10 09:36:05.802310 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-02-10 09:36:05.802330 | orchestrator | 2025-02-10 09:36:05.802345 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-10 09:36:05.802360 | orchestrator | Monday 10 February 2025 09:35:36 +0000 (0:00:00.223) 0:00:00.784 ******* 2025-02-10 09:36:05.802375 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:36:05.802390 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:36:05.802404 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:36:05.802418 | orchestrator | 2025-02-10 09:36:05.802432 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-10 09:36:05.802446 | orchestrator | Monday 10 February 2025 09:35:37 +0000 (0:00:00.958) 0:00:01.742 ******* 2025-02-10 09:36:05.802496 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-02-10 09:36:05.802520 | orchestrator | 2025-02-10 09:36:05.802543 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-10 09:36:05.802568 | orchestrator | Monday 10 February 2025 09:35:37 +0000 (0:00:00.239) 0:00:01.982 ******* 2025-02-10 09:36:05.802591 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.802616 | orchestrator | 2025-02-10 09:36:05.802641 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-10 09:36:05.802664 | orchestrator | Monday 10 February 2025 09:35:38 +0000 (0:00:00.728) 0:00:02.711 ******* 2025-02-10 09:36:05.802684 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.802698 | orchestrator | 2025-02-10 09:36:05.802712 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-10 09:36:05.802726 | orchestrator | Monday 10 February 2025 09:35:38 +0000 (0:00:00.156) 0:00:02.867 ******* 2025-02-10 09:36:05.802739 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.802753 | orchestrator | 2025-02-10 09:36:05.802767 | orchestrator | TASK [ceph-facts : set_fact container_binary] 
********************************** 2025-02-10 09:36:05.802780 | orchestrator | Monday 10 February 2025 09:35:38 +0000 (0:00:00.468) 0:00:03.336 ******* 2025-02-10 09:36:05.802794 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.802808 | orchestrator | 2025-02-10 09:36:05.802838 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-10 09:36:05.802853 | orchestrator | Monday 10 February 2025 09:35:38 +0000 (0:00:00.150) 0:00:03.486 ******* 2025-02-10 09:36:05.802867 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.802881 | orchestrator | 2025-02-10 09:36:05.802895 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-10 09:36:05.802909 | orchestrator | Monday 10 February 2025 09:35:39 +0000 (0:00:00.150) 0:00:03.637 ******* 2025-02-10 09:36:05.802922 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.802936 | orchestrator | 2025-02-10 09:36:05.802950 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-10 09:36:05.802964 | orchestrator | Monday 10 February 2025 09:35:39 +0000 (0:00:00.169) 0:00:03.807 ******* 2025-02-10 09:36:05.802977 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.802992 | orchestrator | 2025-02-10 09:36:05.803041 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-10 09:36:05.803055 | orchestrator | Monday 10 February 2025 09:35:39 +0000 (0:00:00.168) 0:00:03.975 ******* 2025-02-10 09:36:05.803069 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.803083 | orchestrator | 2025-02-10 09:36:05.803097 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-10 09:36:05.803110 | orchestrator | Monday 10 February 2025 09:35:39 +0000 (0:00:00.339) 0:00:04.315 ******* 2025-02-10 09:36:05.803124 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:36:05.803138 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:36:05.803152 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:36:05.803166 | orchestrator | 2025-02-10 09:36:05.803179 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-10 09:36:05.803193 | orchestrator | Monday 10 February 2025 09:35:40 +0000 (0:00:00.837) 0:00:05.153 ******* 2025-02-10 09:36:05.803207 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.803243 | orchestrator | 2025-02-10 09:36:05.803257 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-10 09:36:05.803271 | orchestrator | Monday 10 February 2025 09:35:40 +0000 (0:00:00.256) 0:00:05.409 ******* 2025-02-10 09:36:05.803285 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:36:05.803299 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:36:05.803313 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:36:05.803339 | orchestrator | 2025-02-10 09:36:05.803353 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-10 09:36:05.803367 | orchestrator | Monday 10 February 2025 09:35:42 +0000 (0:00:01.884) 0:00:07.293 ******* 2025-02-10 09:36:05.803380 
| orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:36:05.803394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:36:05.803408 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:36:05.803422 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.803436 | orchestrator | 2025-02-10 09:36:05.803450 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-10 09:36:05.803480 | orchestrator | Monday 10 February 2025 09:35:43 +0000 (0:00:00.418) 0:00:07.713 ******* 2025-02-10 09:36:05.803496 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-10 09:36:05.803513 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-10 09:36:05.803528 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-10 09:36:05.803542 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.803555 | orchestrator | 2025-02-10 09:36:05.803569 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-10 09:36:05.803583 | orchestrator | Monday 10 February 2025 09:35:43 +0000 (0:00:00.728) 0:00:08.441 ******* 2025-02-10 09:36:05.803599 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:36:05.803620 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:36:05.803641 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:36:05.803666 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.803690 | orchestrator | 2025-02-10 09:36:05.803714 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-10 09:36:05.803738 | orchestrator | Monday 10 February 2025 09:35:44 +0000 (0:00:00.180) 0:00:08.622 ******* 2025-02-10 
09:36:05.803767 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'ff592b750b12', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-10 09:35:41.488587', 'end': '2025-02-10 09:35:41.532769', 'delta': '0:00:00.044182', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ff592b750b12'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-02-10 09:36:05.803807 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '547f5fa1e985', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-10 09:35:42.014537', 'end': '2025-02-10 09:35:42.045168', 'delta': '0:00:00.030631', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['547f5fa1e985'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-02-10 09:36:05.803848 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'd5aa104136e6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-10 09:35:42.559620', 'end': '2025-02-10 09:35:42.601080', 'delta': '0:00:00.041460', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d5aa104136e6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-02-10 09:36:05.803875 | orchestrator | 2025-02-10 09:36:05.803900 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-10 09:36:05.803925 | orchestrator | Monday 10 February 2025 09:35:44 +0000 (0:00:00.204) 0:00:08.826 ******* 2025-02-10 09:36:05.803950 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.803975 | orchestrator | 2025-02-10 09:36:05.804025 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-10 09:36:05.804055 | orchestrator | Monday 10 February 2025 09:35:44 +0000 (0:00:00.241) 0:00:09.068 ******* 2025-02-10 09:36:05.804081 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-02-10 09:36:05.804102 | orchestrator | 2025-02-10 09:36:05.804116 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-10 09:36:05.804130 | orchestrator | Monday 10 February 2025 09:35:45 +0000 (0:00:01.447) 0:00:10.516 ******* 2025-02-10 09:36:05.804144 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804158 | orchestrator | 2025-02-10 09:36:05.804180 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-10 
09:36:05.804195 | orchestrator | Monday 10 February 2025 09:35:46 +0000 (0:00:00.178) 0:00:10.695 ******* 2025-02-10 09:36:05.804209 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804223 | orchestrator | 2025-02-10 09:36:05.804236 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:36:05.804250 | orchestrator | Monday 10 February 2025 09:35:46 +0000 (0:00:00.239) 0:00:10.934 ******* 2025-02-10 09:36:05.804264 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804277 | orchestrator | 2025-02-10 09:36:05.804291 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-10 09:36:05.804305 | orchestrator | Monday 10 February 2025 09:35:46 +0000 (0:00:00.134) 0:00:11.068 ******* 2025-02-10 09:36:05.804318 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.804332 | orchestrator | 2025-02-10 09:36:05.804346 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-10 09:36:05.804359 | orchestrator | Monday 10 February 2025 09:35:46 +0000 (0:00:00.125) 0:00:11.194 ******* 2025-02-10 09:36:05.804373 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804412 | orchestrator | 2025-02-10 09:36:05.804426 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:36:05.804440 | orchestrator | Monday 10 February 2025 09:35:46 +0000 (0:00:00.207) 0:00:11.402 ******* 2025-02-10 09:36:05.804454 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804467 | orchestrator | 2025-02-10 09:36:05.804481 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-10 09:36:05.804495 | orchestrator | Monday 10 February 2025 09:35:46 +0000 (0:00:00.125) 0:00:11.528 ******* 2025-02-10 09:36:05.804509 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804523 | orchestrator | 2025-02-10 09:36:05.804537 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-10 09:36:05.804551 | orchestrator | Monday 10 February 2025 09:35:47 +0000 (0:00:00.129) 0:00:11.657 ******* 2025-02-10 09:36:05.804564 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804578 | orchestrator | 2025-02-10 09:36:05.804591 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-10 09:36:05.804605 | orchestrator | Monday 10 February 2025 09:35:47 +0000 (0:00:00.114) 0:00:11.772 ******* 2025-02-10 09:36:05.804619 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804632 | orchestrator | 2025-02-10 09:36:05.804649 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-10 09:36:05.804672 | orchestrator | Monday 10 February 2025 09:35:47 +0000 (0:00:00.121) 0:00:11.894 ******* 2025-02-10 09:36:05.804688 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804702 | orchestrator | 2025-02-10 09:36:05.804716 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-10 09:36:05.804729 | orchestrator | Monday 10 February 2025 09:35:47 +0000 (0:00:00.260) 0:00:12.155 ******* 2025-02-10 09:36:05.804743 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804761 | orchestrator | 2025-02-10 09:36:05.804783 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] 
*** 2025-02-10 09:36:05.804803 | orchestrator | Monday 10 February 2025 09:35:47 +0000 (0:00:00.119) 0:00:12.274 ******* 2025-02-10 09:36:05.804817 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.804830 | orchestrator | 2025-02-10 09:36:05.804844 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-10 09:36:05.804858 | orchestrator | Monday 10 February 2025 09:35:47 +0000 (0:00:00.115) 0:00:12.390 ******* 2025-02-10 09:36:05.804872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:36:05.804897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:36:05.804920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:36:05.804935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:36:05.804957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:36:05.804971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:36:05.804990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:36:05.805094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:36:05.805144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c85cd12-85f6-4a1e-a24f-730d9e6d165f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:36:05.805198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ec8f61f-9e5b-49cd-9e82-40bf07cffc70', 'scsi-SQEMU_QEMU_HARDDISK_4ec8f61f-9e5b-49cd-9e82-40bf07cffc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:36:05.805216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96415da1-6a76-4477-bfa7-f065f33f8e6a', 'scsi-SQEMU_QEMU_HARDDISK_96415da1-6a76-4477-bfa7-f065f33f8e6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:36:05.805231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c1ae1e45-2170-46e6-8462-912ee8672daa', 'scsi-SQEMU_QEMU_HARDDISK_c1ae1e45-2170-46e6-8462-912ee8672daa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:36:05.805246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:36:05.805262 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.805276 | orchestrator | 2025-02-10 09:36:05.805290 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-10 09:36:05.805304 | orchestrator | Monday 10 February 2025 09:35:48 +0000 (0:00:00.313) 0:00:12.703 ******* 2025-02-10 09:36:05.805318 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.805332 | orchestrator | 2025-02-10 09:36:05.805346 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-10 09:36:05.805359 | orchestrator | Monday 10 February 2025 09:35:48 +0000 (0:00:00.257) 0:00:12.960 ******* 2025-02-10 09:36:05.805373 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.805392 | orchestrator | 2025-02-10 09:36:05.805406 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-02-10 09:36:05.805419 | orchestrator | Monday 10 February 2025 09:35:48 +0000 (0:00:00.134) 0:00:13.094 ******* 2025-02-10 09:36:05.805433 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.805447 | orchestrator | 2025-02-10 09:36:05.805460 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-10 09:36:05.805479 | orchestrator | Monday 10 February 2025 09:35:48 +0000 (0:00:00.155) 0:00:13.249 ******* 2025-02-10 09:36:05.805499 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.805514 | orchestrator | 2025-02-10 09:36:05.805538 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-10 09:36:05.805554 | orchestrator | Monday 10 February 2025 09:35:49 +0000 (0:00:00.473) 0:00:13.723 ******* 2025-02-10 09:36:05.805575 | 
orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.805588 | orchestrator | 2025-02-10 09:36:05.805601 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:36:05.805613 | orchestrator | Monday 10 February 2025 09:35:49 +0000 (0:00:00.130) 0:00:13.853 ******* 2025-02-10 09:36:05.805625 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.805637 | orchestrator | 2025-02-10 09:36:05.805650 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:36:05.805662 | orchestrator | Monday 10 February 2025 09:35:49 +0000 (0:00:00.425) 0:00:14.279 ******* 2025-02-10 09:36:05.805674 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.805686 | orchestrator | 2025-02-10 09:36:05.805699 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:36:05.805711 | orchestrator | Monday 10 February 2025 09:35:49 +0000 (0:00:00.140) 0:00:14.420 ******* 2025-02-10 09:36:05.805723 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.805735 | orchestrator | 2025-02-10 09:36:05.805747 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:36:05.805760 | orchestrator | Monday 10 February 2025 09:35:50 +0000 (0:00:00.553) 0:00:14.973 ******* 2025-02-10 09:36:05.805772 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.805784 | orchestrator | 2025-02-10 09:36:05.805796 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-10 09:36:05.805808 | orchestrator | Monday 10 February 2025 09:35:50 +0000 (0:00:00.141) 0:00:15.115 ******* 2025-02-10 09:36:05.805820 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:36:05.805833 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:36:05.805845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:36:05.805857 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.805869 | orchestrator | 2025-02-10 09:36:05.805882 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-10 09:36:05.805894 | orchestrator | Monday 10 February 2025 09:35:51 +0000 (0:00:00.576) 0:00:15.692 ******* 2025-02-10 09:36:05.805906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:36:05.805918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:36:05.805930 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:36:05.805942 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.805955 | orchestrator | 2025-02-10 09:36:05.805967 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-10 09:36:05.805979 | orchestrator | Monday 10 February 2025 09:35:51 +0000 (0:00:00.491) 0:00:16.183 ******* 2025-02-10 09:36:05.805991 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:36:05.806059 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:36:05.806072 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:36:05.806085 | orchestrator | 2025-02-10 09:36:05.806097 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-10 09:36:05.806109 | orchestrator | Monday 10 February 2025 09:35:52 
+0000 (0:00:01.117) 0:00:17.300 ******* 2025-02-10 09:36:05.806122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:36:05.806135 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:36:05.806157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:36:05.806178 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.806207 | orchestrator | 2025-02-10 09:36:05.806228 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-02-10 09:36:05.806250 | orchestrator | Monday 10 February 2025 09:35:52 +0000 (0:00:00.228) 0:00:17.529 ******* 2025-02-10 09:36:05.806273 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:36:05.806296 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:36:05.806328 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:36:05.806348 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.806370 | orchestrator | 2025-02-10 09:36:05.806391 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-10 09:36:05.806414 | orchestrator | Monday 10 February 2025 09:35:53 +0000 (0:00:00.223) 0:00:17.753 ******* 2025-02-10 09:36:05.806435 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-02-10 09:36:05.806451 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:36:05.806464 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:36:05.806476 | orchestrator | 2025-02-10 09:36:05.806488 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-10 09:36:05.806501 | orchestrator | Monday 10 February 2025 09:35:53 +0000 (0:00:00.187) 0:00:17.940 ******* 2025-02-10 09:36:05.806513 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.806526 | orchestrator | 2025-02-10 09:36:05.806538 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-10 09:36:05.806550 | orchestrator | Monday 10 February 2025 09:35:53 +0000 (0:00:00.114) 0:00:18.054 ******* 2025-02-10 09:36:05.806562 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:05.806574 | orchestrator | 2025-02-10 09:36:05.806586 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-10 09:36:05.806598 | orchestrator | Monday 10 February 2025 09:35:53 +0000 (0:00:00.251) 0:00:18.305 ******* 2025-02-10 09:36:05.806610 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:36:05.806631 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:36:05.806644 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:36:05.806662 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-10 09:36:05.806675 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:36:05.806687 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:36:05.806699 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-02-10 09:36:05.806711 | orchestrator | 2025-02-10 09:36:05.806723 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-10 09:36:05.806736 | orchestrator | Monday 10 February 2025 09:35:54 +0000 (0:00:00.839) 0:00:19.145 ******* 2025-02-10 09:36:05.806748 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:36:05.806761 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:36:05.806773 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:36:05.806785 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-10 09:36:05.806797 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:36:05.806809 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:36:05.806821 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:36:05.806834 | orchestrator | 2025-02-10 09:36:05.806846 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-02-10 09:36:05.806858 | orchestrator | Monday 10 February 2025 09:35:56 +0000 (0:00:01.884) 0:00:21.030 ******* 2025-02-10 09:36:05.806870 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:05.806882 | orchestrator | 2025-02-10 09:36:05.806895 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-02-10 09:36:05.806907 | orchestrator | Monday 10 February 2025 09:35:56 +0000 (0:00:00.502) 0:00:21.532 ******* 2025-02-10 09:36:05.806927 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:36:05.806940 | orchestrator | 2025-02-10 09:36:05.806952 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-02-10 09:36:05.806965 | orchestrator | Monday 10 February 2025 09:35:57 +0000 (0:00:00.663) 0:00:22.196 ******* 2025-02-10 09:36:05.806977 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-02-10 09:36:05.806989 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-02-10 09:36:05.807060 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-02-10 09:36:05.807075 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-02-10 09:36:05.807095 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-02-10 09:36:05.807115 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-02-10 09:36:05.807135 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-02-10 09:36:05.807154 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-02-10 09:36:05.807176 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-02-10 09:36:05.807196 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-02-10 09:36:05.807218 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-02-10 09:36:05.807239 | orchestrator | changed: 
[testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-02-10 09:36:05.807256 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-02-10 09:36:05.807268 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-02-10 09:36:05.807281 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-02-10 09:36:05.807293 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-02-10 09:36:05.807312 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-02-10 09:36:05.807324 | orchestrator | 2025-02-10 09:36:05.807337 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:36:05.807349 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-02-10 09:36:05.807363 | orchestrator | 2025-02-10 09:36:05.807375 | orchestrator | 2025-02-10 09:36:05.807387 | orchestrator | 2025-02-10 09:36:05.807399 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:36:05.807412 | orchestrator | Monday 10 February 2025 09:36:04 +0000 (0:00:06.651) 0:00:28.847 ******* 2025-02-10 09:36:05.807425 | orchestrator | =============================================================================== 2025-02-10 09:36:05.807437 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.65s 2025-02-10 09:36:05.807450 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.89s 2025-02-10 09:36:05.807463 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.88s 2025-02-10 09:36:05.807483 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.45s 2025-02-10 09:36:08.850655 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.12s 2025-02-10 09:36:08.850808 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.96s 2025-02-10 09:36:08.850828 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.84s 2025-02-10 09:36:08.850844 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.84s 2025-02-10 09:36:08.850858 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.73s 2025-02-10 09:36:08.850906 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.73s 2025-02-10 09:36:08.850921 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.66s 2025-02-10 09:36:08.850935 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.58s 2025-02-10 09:36:08.850949 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.55s 2025-02-10 09:36:08.850963 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.50s 2025-02-10 09:36:08.850977 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.49s 2025-02-10 09:36:08.850991 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.47s 2025-02-10 09:36:08.851063 | orchestrator | ceph-facts : 
check if podman binary is present -------------------------- 0.47s 2025-02-10 09:36:08.851079 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.43s 2025-02-10 09:36:08.851092 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.42s 2025-02-10 09:36:08.851106 | orchestrator | ceph-facts : set_fact ceph_release ceph_stable_release ------------------ 0.34s 2025-02-10 09:36:08.851123 | orchestrator | 2025-02-10 09:36:05 | INFO  | Task bbb6262e-7040-448d-8d28-983e845cf62b is in state SUCCESS 2025-02-10 09:36:08.851138 | orchestrator | 2025-02-10 09:36:05 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:36:08.851152 | orchestrator | 2025-02-10 09:36:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:08.851188 | orchestrator | 2025-02-10 09:36:08 | INFO  | Task d3250230-bc7c-416e-ae3c-a0b2fb5227a7 is in state SUCCESS 2025-02-10 09:36:08.851715 | orchestrator | 2025-02-10 09:36:08 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:36:11.906129 | orchestrator | 2025-02-10 09:36:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:11.906310 | orchestrator | 2025-02-10 09:36:11 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state STARTED 2025-02-10 09:36:14.943324 | orchestrator | 2025-02-10 09:36:11 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:14.943693 | orchestrator | 2025-02-10 09:36:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:14.943745 | orchestrator | 2025-02-10 09:36:14 | INFO  | Task 64b2ff07-7227-444b-8be2-1ea556056a3c is in state SUCCESS 2025-02-10 09:36:14.943762 | orchestrator | 2025-02-10 09:36:14.943777 | orchestrator | 2025-02-10 09:36:14.943792 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-02-10 09:36:14.943806 | orchestrator | 2025-02-10 09:36:14.943820 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-02-10 09:36:14.943834 | orchestrator | Monday 10 February 2025 09:35:25 +0000 (0:00:00.192) 0:00:00.192 ******* 2025-02-10 09:36:14.943849 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-02-10 09:36:14.943862 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:36:14.943876 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:36:14.943890 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-02-10 09:36:14.943904 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:36:14.943937 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-02-10 09:36:14.943952 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-02-10 09:36:14.943965 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-02-10 09:36:14.943979 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-02-10 09:36:14.944052 | orchestrator | 2025-02-10 09:36:14.944068 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-02-10 09:36:14.944082 | orchestrator | Monday 10 February 2025 09:35:28 +0000 (0:00:03.516) 0:00:03.709 
******* 2025-02-10 09:36:14.944096 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-02-10 09:36:14.944110 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:36:14.944124 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:36:14.944137 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-02-10 09:36:14.944152 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:36:14.944166 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-02-10 09:36:14.944180 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-02-10 09:36:14.944193 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-02-10 09:36:14.944207 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-02-10 09:36:14.944220 | orchestrator | 2025-02-10 09:36:14.944235 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-02-10 09:36:14.944251 | orchestrator | Monday 10 February 2025 09:35:29 +0000 (0:00:00.265) 0:00:03.975 ******* 2025-02-10 09:36:14.944282 | orchestrator | ok: [testbed-manager] => { 2025-02-10 09:36:14.944301 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 2025-02-10 09:36:14.944319 | orchestrator | } 2025-02-10 09:36:14.944335 | orchestrator | 2025-02-10 09:36:14.944363 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-02-10 09:36:14.944380 | orchestrator | Monday 10 February 2025 09:35:29 +0000 (0:00:00.179) 0:00:04.155 ******* 2025-02-10 09:36:14.944396 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:14.944412 | orchestrator | 2025-02-10 09:36:14.944427 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-02-10 09:36:14.944453 | orchestrator | Monday 10 February 2025 09:36:05 +0000 (0:00:35.661) 0:00:39.816 ******* 2025-02-10 09:36:14.944470 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-02-10 09:36:14.944486 | orchestrator | 2025-02-10 09:36:14.944502 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-02-10 09:36:14.944517 | orchestrator | Monday 10 February 2025 09:36:05 +0000 (0:00:00.476) 0:00:40.293 ******* 2025-02-10 09:36:14.944535 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-02-10 09:36:14.944552 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-02-10 09:36:14.944568 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': 
'/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-02-10 09:36:14.944584 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-02-10 09:36:14.944610 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-02-10 09:36:14.947189 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-02-10 09:36:14.947277 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-02-10 09:36:14.947296 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-02-10 09:36:14.947310 | orchestrator | 2025-02-10 09:36:14.947324 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-02-10 09:36:14.947338 | orchestrator | Monday 10 February 2025 09:36:08 +0000 (0:00:02.821) 0:00:43.115 ******* 2025-02-10 09:36:14.947352 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:36:14.947367 | orchestrator | 2025-02-10 09:36:14.947380 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:36:14.947395 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:36:14.947410 | orchestrator | 2025-02-10 09:36:14.947423 | orchestrator | Monday 10 February 2025 09:36:08 +0000 (0:00:00.030) 0:00:43.145 ******* 2025-02-10 09:36:14.947437 | orchestrator | =============================================================================== 2025-02-10 09:36:14.947450 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 35.66s 2025-02-10 09:36:14.947872 | orchestrator | Check ceph keys --------------------------------------------------------- 3.52s 2025-02-10 09:36:14.947890 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.82s 2025-02-10 09:36:14.947913 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.48s 2025-02-10 09:36:14.947928 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.27s 2025-02-10 09:36:14.947942 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.18s 2025-02-10 09:36:14.947957 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s 2025-02-10 09:36:14.947971 | orchestrator | 2025-02-10 09:36:14.948052 | orchestrator | 2025-02-10 09:36:14.948072 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:36:14.948086 | orchestrator | 2025-02-10 09:36:14.948100 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:36:14.948114 | orchestrator | Monday 10 February 2025 09:33:23 +0000 (0:00:00.366) 0:00:00.366 ******* 2025-02-10 09:36:14.948128 | orchestrator 
| ok: [testbed-node-0] 2025-02-10 09:36:14.948143 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:36:14.948156 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:36:14.948170 | orchestrator | 2025-02-10 09:36:14.948184 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:36:14.948198 | orchestrator | Monday 10 February 2025 09:33:24 +0000 (0:00:00.479) 0:00:00.846 ******* 2025-02-10 09:36:14.948211 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-02-10 09:36:14.948225 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-02-10 09:36:14.948239 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-02-10 09:36:14.948253 | orchestrator | 2025-02-10 09:36:14.948267 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-02-10 09:36:14.948280 | orchestrator | 2025-02-10 09:36:14.948294 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:36:14.948308 | orchestrator | Monday 10 February 2025 09:33:24 +0000 (0:00:00.404) 0:00:01.250 ******* 2025-02-10 09:36:14.948322 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:36:14.948336 | orchestrator | 2025-02-10 09:36:14.948349 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-02-10 09:36:14.948363 | orchestrator | Monday 10 February 2025 09:33:25 +0000 (0:00:01.144) 0:00:02.394 ******* 2025-02-10 09:36:14.948392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.948410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.948469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.948489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.948508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.948532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.948549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.948566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.948583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.948598 | orchestrator | 2025-02-10 09:36:14.948620 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-02-10 09:36:14.948637 | orchestrator | Monday 10 February 2025 09:33:28 +0000 (0:00:02.548) 0:00:04.943 ******* 2025-02-10 09:36:14.948652 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-02-10 09:36:14.948666 | orchestrator | 2025-02-10 09:36:14.948680 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-02-10 09:36:14.948694 | orchestrator | Monday 10 February 2025 09:33:28 +0000 (0:00:00.586) 0:00:05.529 ******* 2025-02-10 09:36:14.948708 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:14.948722 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:36:14.948736 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:36:14.948750 | orchestrator | 2025-02-10 09:36:14.948764 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-02-10 09:36:14.948784 | orchestrator | Monday 10 February 2025 09:33:29 +0000 (0:00:00.488) 0:00:06.018 ******* 2025-02-10 09:36:14.948798 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:36:14.948812 | orchestrator | 2025-02-10 09:36:14.948826 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:36:14.948840 | orchestrator | Monday 10 February 2025 09:33:29 +0000 (0:00:00.445) 0:00:06.464 ******* 2025-02-10 09:36:14.948853 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
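Aside on the copy-certs.yml include seen above: before the Keystone containers are (re)started, this step copies CA material and any backend TLS files into each container's kolla config directory on the target nodes. A minimal sketch of what such a task can look like, using only ansible.builtin.copy; the source path and the keystone_services dictionary used here are illustrative assumptions, not taken from this log or from the kolla-ansible source:

    # Illustrative sketch only - variable names and paths are assumed, not from this job log.
    - name: "keystone | Copy extra CA certificates into the service config dirs (sketch)"
      become: true
      ansible.builtin.copy:
        # Hypothetical staging location for extra CA bundles on the deploy host
        src: "{{ custom_certificates_dir }}/ca/"
        dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
        mode: "0644"
      with_dict: "{{ keystone_services }}"   # assumed dict of service definitions as shown in the log items
      when: item.value.enabled | bool

In this run the copy reports "changed" for all three containers on every node, while the subsequent backend internal TLS certificate and key tasks are skipped per item, consistent with tls_backend being set to 'no' for both the keystone_internal and keystone_external haproxy entries in the service definitions above.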
2025-02-10 09:36:14.948867 | orchestrator | 2025-02-10 09:36:14.948881 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-02-10 09:36:14.948895 | orchestrator | Monday 10 February 2025 09:33:30 +0000 (0:00:00.776) 0:00:07.240 ******* 2025-02-10 09:36:14.948914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.948930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.948954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.948969 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949151 | orchestrator | 2025-02-10 09:36:14.949164 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-02-10 09:36:14.949194 | orchestrator | Monday 10 February 2025 09:33:34 +0000 (0:00:03.952) 0:00:11.193 ******* 2025-02-10 09:36:14.949210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:36:14.949231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.949246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:36:14.949260 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.949274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:36:14.949295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.949323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:36:14.949337 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:14.949352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:36:14.949366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.949380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:36:14.949395 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.949408 | orchestrator | 2025-02-10 09:36:14.949422 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-02-10 09:36:14.949436 | orchestrator | Monday 10 February 2025 09:33:35 +0000 (0:00:01.404) 0:00:12.598 ******* 2025-02-10 09:36:14.949457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:36:14.949484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.949499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:36:14.949514 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.949528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:36:14.949543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.949557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:36:14.949578 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:14.949602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-02-10 09:36:14.949618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.949632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:36:14.949646 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.949660 | orchestrator | 2025-02-10 09:36:14.949674 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-02-10 09:36:14.949688 | orchestrator | Monday 10 February 2025 09:33:37 +0000 (0:00:01.346) 0:00:13.944 ******* 2025-02-10 09:36:14.949702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.949730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.949784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.949799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.949915 | orchestrator | 2025-02-10 09:36:14.949930 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-02-10 09:36:14.949944 | orchestrator | Monday 10 February 2025 09:33:40 +0000 (0:00:03.667) 0:00:17.611 ******* 2025-02-10 09:36:14.949958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.949972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.949993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.950168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.950189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.950203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.950218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.950242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.950256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.950270 | orchestrator | 2025-02-10 09:36:14.950284 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-02-10 09:36:14.950304 | orchestrator | Monday 10 February 2025 09:33:49 +0000 (0:00:08.604) 0:00:26.215 ******* 2025-02-10 09:36:14.950318 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.950332 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:14.950346 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:36:14.950360 | orchestrator | 2025-02-10 09:36:14.950374 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-02-10 09:36:14.950389 | orchestrator | Monday 10 February 2025 09:33:52 +0000 (0:00:02.744) 0:00:28.960 ******* 2025-02-10 
09:36:14.950403 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.950416 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.950430 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:14.950443 | orchestrator | 2025-02-10 09:36:14.950457 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-02-10 09:36:14.950471 | orchestrator | Monday 10 February 2025 09:33:53 +0000 (0:00:01.401) 0:00:30.361 ******* 2025-02-10 09:36:14.950485 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.950498 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:14.950512 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.950526 | orchestrator | 2025-02-10 09:36:14.950540 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-02-10 09:36:14.950554 | orchestrator | Monday 10 February 2025 09:33:54 +0000 (0:00:00.540) 0:00:30.902 ******* 2025-02-10 09:36:14.950567 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.950581 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:14.950593 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.950605 | orchestrator | 2025-02-10 09:36:14.950617 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-02-10 09:36:14.950630 | orchestrator | Monday 10 February 2025 09:33:54 +0000 (0:00:00.485) 0:00:31.387 ******* 2025-02-10 09:36:14.950652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.950672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.950685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.950705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.950726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.950740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:36:14.950759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.950772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.950784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.950797 | orchestrator | 2025-02-10 09:36:14.950810 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:36:14.950827 | orchestrator | Monday 10 February 2025 09:33:57 +0000 (0:00:03.061) 0:00:34.449 ******* 2025-02-10 09:36:14.950839 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.950852 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:14.950864 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.950876 | orchestrator | 2025-02-10 09:36:14.950888 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-02-10 09:36:14.950901 | orchestrator | Monday 10 February 2025 09:33:58 +0000 (0:00:00.495) 0:00:34.945 ******* 2025-02-10 09:36:14.950913 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-10 09:36:14.950926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-10 09:36:14.950938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-10 09:36:14.950951 | orchestrator | 2025-02-10 09:36:14.950963 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-02-10 09:36:14.950975 | orchestrator | Monday 10 February 2025 09:34:00 +0000 (0:00:02.488) 0:00:37.433 ******* 2025-02-10 09:36:14.950988 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:36:14.951022 | orchestrator | 2025-02-10 09:36:14.951038 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-02-10 09:36:14.951050 | orchestrator | Monday 10 February 2025 09:34:01 +0000 (0:00:00.772) 0:00:38.206 ******* 2025-02-10 09:36:14.951069 | orchestrator | skipping: [testbed-node-1] 2025-02-10 
09:36:14.951082 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.951094 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.951106 | orchestrator | 2025-02-10 09:36:14.951118 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-02-10 09:36:14.951130 | orchestrator | Monday 10 February 2025 09:34:03 +0000 (0:00:01.731) 0:00:39.937 ******* 2025-02-10 09:36:14.951143 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-10 09:36:14.951161 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-10 09:36:14.951173 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:36:14.951185 | orchestrator | 2025-02-10 09:36:14.951198 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-02-10 09:36:14.951210 | orchestrator | Monday 10 February 2025 09:34:04 +0000 (0:00:01.356) 0:00:41.293 ******* 2025-02-10 09:36:14.951222 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:14.951235 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:36:14.951247 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:36:14.951259 | orchestrator | 2025-02-10 09:36:14.951271 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-02-10 09:36:14.951283 | orchestrator | Monday 10 February 2025 09:34:05 +0000 (0:00:00.512) 0:00:41.805 ******* 2025-02-10 09:36:14.951295 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-10 09:36:14.951307 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-10 09:36:14.951320 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-10 09:36:14.951332 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-10 09:36:14.951344 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-10 09:36:14.951356 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-10 09:36:14.951369 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-10 09:36:14.951381 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-10 09:36:14.951393 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-10 09:36:14.951405 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-10 09:36:14.951417 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-10 09:36:14.951429 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-10 09:36:14.951442 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-10 09:36:14.951454 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-10 09:36:14.951466 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-10 09:36:14.951478 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:36:14.951490 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:36:14.951502 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:36:14.951515 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:36:14.951527 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:36:14.951543 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:36:14.951562 | orchestrator | 2025-02-10 09:36:14.951575 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-02-10 09:36:14.951592 | orchestrator | Monday 10 February 2025 09:34:20 +0000 (0:00:15.250) 0:00:57.056 ******* 2025-02-10 09:36:14.951605 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:36:14.951617 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:36:14.951630 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:36:14.951642 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:36:14.951654 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:36:14.951666 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:36:14.951678 | orchestrator | 2025-02-10 09:36:14.951690 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-02-10 09:36:14.951703 | orchestrator | Monday 10 February 2025 09:34:24 +0000 (0:00:03.789) 0:01:00.846 ******* 2025-02-10 09:36:14.951715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.951739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.951752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-02-10 09:36:14.951784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.951798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.951810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:36:14.951831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.951845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.951857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:36:14.951876 | orchestrator | 2025-02-10 09:36:14.951889 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:36:14.951901 | orchestrator | Monday 10 February 2025 09:34:27 +0000 (0:00:03.410) 0:01:04.256 ******* 2025-02-10 09:36:14.951913 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.951925 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:14.951937 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.951949 | orchestrator | 2025-02-10 09:36:14.951961 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-02-10 09:36:14.951973 | orchestrator | Monday 10 February 2025 09:34:27 +0000 (0:00:00.303) 0:01:04.560 ******* 2025-02-10 09:36:14.951986 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.951998 | orchestrator | 2025-02-10 09:36:14.952036 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-02-10 09:36:14.952049 | orchestrator | Monday 10 February 2025 09:34:30 +0000 (0:00:02.734) 0:01:07.294 ******* 2025-02-10 09:36:14.952061 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.952073 | orchestrator | 2025-02-10 09:36:14.952085 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 
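
Note: the service-cert-copy and keystone tasks above all iterate over the same service map, which is why every result line carries an item={'key': ..., 'value': ...} pair. Below is a minimal sketch of that map, reconstructed only from the fields visible in this log (values abbreviated; this is not the actual kolla-ansible role file):

import pprint

# Reconstructed from the item= output above; one entry per Keystone container.
keystone_services = {
    "keystone": {
        "container_name": "keystone",
        "group": "keystone",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206",
        # healthcheck_curl probes the node-local API endpoint
        # (address varies per node: 192.168.16.10/.11/.12).
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
                        "timeout": "30"},
        # The 'haproxy' key (omitted here) adds two frontends on port 5000:
        # keystone_internal on the internal VIP and keystone_external at
        # api.testbed.osism.xyz, both mode http with balance roundrobin.
    },
    "keystone-ssh": {
        "container_name": "keystone_ssh",
        # sshd on port 8023, used to distribute fernet keys between the nodes.
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen sshd 8023"]},
    },
    "keystone-fernet": {
        "container_name": "keystone_fernet",
        "healthcheck": {"test": ["CMD-SHELL", "/usr/bin/fernet-healthcheck.sh"]},
    },
}

# The tasks loop over this dict (e.g. via with_dict or dict2items), which
# produces the per-item 'changed'/'skipping' lines seen above for each node.
pprint.pprint(keystone_services)
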
2025-02-10 09:36:14.952098 | orchestrator | Monday 10 February 2025 09:34:32 +0000 (0:00:02.256) 0:01:09.551 ******* 2025-02-10 09:36:14.952110 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:14.952122 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:36:14.952134 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:36:14.952147 | orchestrator | 2025-02-10 09:36:14.952159 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-02-10 09:36:14.952171 | orchestrator | Monday 10 February 2025 09:34:34 +0000 (0:00:01.160) 0:01:10.711 ******* 2025-02-10 09:36:14.952183 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:14.952195 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:36:14.952207 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:36:14.952219 | orchestrator | 2025-02-10 09:36:14.952231 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-02-10 09:36:14.952243 | orchestrator | Monday 10 February 2025 09:34:34 +0000 (0:00:00.385) 0:01:11.097 ******* 2025-02-10 09:36:14.952255 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:14.952267 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:14.952280 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:14.952292 | orchestrator | 2025-02-10 09:36:14.952304 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-02-10 09:36:14.952316 | orchestrator | Monday 10 February 2025 09:34:35 +0000 (0:00:00.791) 0:01:11.888 ******* 2025-02-10 09:36:14.952327 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.952340 | orchestrator | 2025-02-10 09:36:14.952352 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-02-10 09:36:14.952364 | orchestrator | Monday 10 February 2025 09:34:49 +0000 (0:00:14.189) 0:01:26.078 ******* 2025-02-10 09:36:14.952375 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.952387 | orchestrator | 2025-02-10 09:36:14.952399 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-10 09:36:14.952411 | orchestrator | Monday 10 February 2025 09:34:58 +0000 (0:00:08.632) 0:01:34.710 ******* 2025-02-10 09:36:14.952423 | orchestrator | 2025-02-10 09:36:14.952435 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-10 09:36:14.952447 | orchestrator | Monday 10 February 2025 09:34:58 +0000 (0:00:00.240) 0:01:34.951 ******* 2025-02-10 09:36:14.952459 | orchestrator | 2025-02-10 09:36:14.952471 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-10 09:36:14.952483 | orchestrator | Monday 10 February 2025 09:34:58 +0000 (0:00:00.068) 0:01:35.019 ******* 2025-02-10 09:36:14.952495 | orchestrator | 2025-02-10 09:36:14.952507 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-02-10 09:36:14.952526 | orchestrator | Monday 10 February 2025 09:34:58 +0000 (0:00:00.069) 0:01:35.088 ******* 2025-02-10 09:36:14.952538 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.952550 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:36:14.952562 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:14.952574 | orchestrator | 2025-02-10 09:36:14.952595 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-02-10 
09:36:14.952607 | orchestrator | Monday 10 February 2025 09:35:12 +0000 (0:00:14.255) 0:01:49.343 ******* 2025-02-10 09:36:14.952619 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.952631 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:36:14.952644 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:14.952656 | orchestrator | 2025-02-10 09:36:14.952668 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-02-10 09:36:14.952680 | orchestrator | Monday 10 February 2025 09:35:22 +0000 (0:00:10.045) 0:01:59.389 ******* 2025-02-10 09:36:14.952692 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.952704 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:14.952716 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:36:14.952728 | orchestrator | 2025-02-10 09:36:14.952740 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:36:14.952752 | orchestrator | Monday 10 February 2025 09:35:28 +0000 (0:00:05.367) 0:02:04.757 ******* 2025-02-10 09:36:14.952977 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:36:14.952991 | orchestrator | 2025-02-10 09:36:14.953153 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-02-10 09:36:14.953192 | orchestrator | Monday 10 February 2025 09:35:29 +0000 (0:00:01.024) 0:02:05.781 ******* 2025-02-10 09:36:14.953204 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:36:14.953217 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:14.953229 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:36:14.953241 | orchestrator | 2025-02-10 09:36:14.953253 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-02-10 09:36:14.953265 | orchestrator | Monday 10 February 2025 09:35:30 +0000 (0:00:01.142) 0:02:06.924 ******* 2025-02-10 09:36:14.953277 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:14.953290 | orchestrator | 2025-02-10 09:36:14.953302 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-02-10 09:36:14.953314 | orchestrator | Monday 10 February 2025 09:35:31 +0000 (0:00:01.606) 0:02:08.530 ******* 2025-02-10 09:36:14.953326 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-02-10 09:36:14.953339 | orchestrator | 2025-02-10 09:36:14.953350 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-02-10 09:36:14.953360 | orchestrator | Monday 10 February 2025 09:35:41 +0000 (0:00:09.641) 0:02:18.171 ******* 2025-02-10 09:36:14.953370 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-02-10 09:36:14.953380 | orchestrator | 2025-02-10 09:36:14.953390 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-02-10 09:36:14.953400 | orchestrator | Monday 10 February 2025 09:36:01 +0000 (0:00:20.341) 0:02:38.513 ******* 2025-02-10 09:36:14.953420 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-02-10 09:36:17.997213 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-02-10 09:36:17.997357 | orchestrator | 2025-02-10 09:36:17.997378 | orchestrator | TASK [service-ks-register : keystone | Creating 
projects] ********************** 2025-02-10 09:36:17.997394 | orchestrator | Monday 10 February 2025 09:36:09 +0000 (0:00:07.301) 0:02:45.814 ******* 2025-02-10 09:36:17.997409 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:17.997424 | orchestrator | 2025-02-10 09:36:17.997439 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-02-10 09:36:17.997507 | orchestrator | Monday 10 February 2025 09:36:09 +0000 (0:00:00.148) 0:02:45.963 ******* 2025-02-10 09:36:17.997524 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:17.997538 | orchestrator | 2025-02-10 09:36:17.997552 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-02-10 09:36:17.997566 | orchestrator | Monday 10 February 2025 09:36:09 +0000 (0:00:00.102) 0:02:46.065 ******* 2025-02-10 09:36:17.997580 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:17.997593 | orchestrator | 2025-02-10 09:36:17.997607 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-02-10 09:36:17.997621 | orchestrator | Monday 10 February 2025 09:36:09 +0000 (0:00:00.116) 0:02:46.181 ******* 2025-02-10 09:36:17.997635 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:17.997649 | orchestrator | 2025-02-10 09:36:17.997663 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-02-10 09:36:17.997677 | orchestrator | Monday 10 February 2025 09:36:09 +0000 (0:00:00.381) 0:02:46.563 ******* 2025-02-10 09:36:17.997691 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:17.997705 | orchestrator | 2025-02-10 09:36:17.997719 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:36:17.997732 | orchestrator | Monday 10 February 2025 09:36:13 +0000 (0:00:03.504) 0:02:50.067 ******* 2025-02-10 09:36:17.997746 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:17.997760 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:17.997776 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:17.997792 | orchestrator | 2025-02-10 09:36:17.997808 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:36:17.997824 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:36:17.997842 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-02-10 09:36:17.997857 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-02-10 09:36:17.997873 | orchestrator | 2025-02-10 09:36:17.997888 | orchestrator | 2025-02-10 09:36:17.997903 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:36:17.997919 | orchestrator | Monday 10 February 2025 09:36:13 +0000 (0:00:00.484) 0:02:50.551 ******* 2025-02-10 09:36:17.997952 | orchestrator | =============================================================================== 2025-02-10 09:36:17.997968 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.34s 2025-02-10 09:36:17.997983 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 15.25s 2025-02-10 09:36:17.997998 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.26s 2025-02-10 
09:36:17.998100 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.19s 2025-02-10 09:36:17.998116 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.05s 2025-02-10 09:36:17.998132 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.64s 2025-02-10 09:36:17.998147 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.63s 2025-02-10 09:36:17.998161 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 8.60s 2025-02-10 09:36:17.998175 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.30s 2025-02-10 09:36:17.998189 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.37s 2025-02-10 09:36:17.998202 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.95s 2025-02-10 09:36:17.998216 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.79s 2025-02-10 09:36:17.998230 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.67s 2025-02-10 09:36:17.998256 | orchestrator | keystone : Creating default user role ----------------------------------- 3.50s 2025-02-10 09:36:17.998270 | orchestrator | keystone : Check keystone containers ------------------------------------ 3.41s 2025-02-10 09:36:17.998284 | orchestrator | keystone : Copying over existing policy file ---------------------------- 3.06s 2025-02-10 09:36:17.998297 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.74s 2025-02-10 09:36:17.998311 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.73s 2025-02-10 09:36:17.998324 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.55s 2025-02-10 09:36:17.998338 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.49s 2025-02-10 09:36:17.998354 | orchestrator | 2025-02-10 09:36:14 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:17.998368 | orchestrator | 2025-02-10 09:36:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:17.998402 | orchestrator | 2025-02-10 09:36:17 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:18.000172 | orchestrator | 2025-02-10 09:36:17 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:18.000211 | orchestrator | 2025-02-10 09:36:18 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:18.007428 | orchestrator | 2025-02-10 09:36:18 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:18.011825 | orchestrator | 2025-02-10 09:36:18 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:21.051669 | orchestrator | 2025-02-10 09:36:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:21.051831 | orchestrator | 2025-02-10 09:36:21 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:21.052361 | orchestrator | 2025-02-10 09:36:21 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:21.055486 | orchestrator | 2025-02-10 09:36:21 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 
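
Note: the block of INFO lines that follows the play is the deploy wrapper waiting for the deployment tasks it queued to leave the STARTED state, polling once per second. A minimal sketch of that wait loop, assuming only the behaviour shown in the log (the names wait_for_tasks and get_state are illustrative, not the actual osism code):

import time

def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll each task ID until none is reported in state STARTED, as above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. look up the queued task's result
            print(f"INFO  | Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {interval} second(s) until the next check")
            time.sleep(interval)
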
2025-02-10 09:36:21.058236 | orchestrator | 2025-02-10 09:36:21 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:21.061502 | orchestrator | 2025-02-10 09:36:21 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:24.123692 | orchestrator | 2025-02-10 09:36:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:24.123863 | orchestrator | 2025-02-10 09:36:24 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:24.124224 | orchestrator | 2025-02-10 09:36:24 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:24.127253 | orchestrator | 2025-02-10 09:36:24 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:24.129673 | orchestrator | 2025-02-10 09:36:24 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:24.133226 | orchestrator | 2025-02-10 09:36:24 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:24.134162 | orchestrator | 2025-02-10 09:36:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:27.185862 | orchestrator | 2025-02-10 09:36:27 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:27.186254 | orchestrator | 2025-02-10 09:36:27 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:27.187490 | orchestrator | 2025-02-10 09:36:27 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:27.189648 | orchestrator | 2025-02-10 09:36:27 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:27.191143 | orchestrator | 2025-02-10 09:36:27 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:30.250826 | orchestrator | 2025-02-10 09:36:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:30.250983 | orchestrator | 2025-02-10 09:36:30 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:30.252418 | orchestrator | 2025-02-10 09:36:30 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:30.254443 | orchestrator | 2025-02-10 09:36:30 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:30.258258 | orchestrator | 2025-02-10 09:36:30 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:30.260840 | orchestrator | 2025-02-10 09:36:30 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:33.315858 | orchestrator | 2025-02-10 09:36:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:33.316045 | orchestrator | 2025-02-10 09:36:33 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:33.316391 | orchestrator | 2025-02-10 09:36:33 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:33.316425 | orchestrator | 2025-02-10 09:36:33 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:33.316446 | orchestrator | 2025-02-10 09:36:33 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:33.317396 | orchestrator | 2025-02-10 09:36:33 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:36.360755 | orchestrator | 2025-02-10 09:36:33 | INFO  | Wait 1 second(s) until the next check 
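The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are the deployment tooling polling its background tasks until each one reports SUCCESS or FAILURE. A minimal sketch of that polling pattern follows; it is an illustration only, not the osism implementation, and `get_state` stands in for whatever returns a task's current state:

```python
# Minimal sketch of the polling pattern behind the log lines above: report the
# state of every pending task, drop finished ones, and sleep before the next
# round. Illustration only; get_state is a placeholder callback.
import time

def wait_for_tasks(get_state, task_ids, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```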
2025-02-10 09:36:36.360914 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:36.361191 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:36.362145 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:36.363231 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:36.364265 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:39.403098 | orchestrator | 2025-02-10 09:36:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:39.403394 | orchestrator | 2025-02-10 09:36:39 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:39.404859 | orchestrator | 2025-02-10 09:36:39 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:39.404892 | orchestrator | 2025-02-10 09:36:39 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:39.404913 | orchestrator | 2025-02-10 09:36:39 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:39.407038 | orchestrator | 2025-02-10 09:36:39 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:42.451428 | orchestrator | 2025-02-10 09:36:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:42.451728 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:42.453916 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:42.454112 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:42.457427 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:42.458757 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:42.458918 | orchestrator | 2025-02-10 09:36:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:45.522836 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:45.523170 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:45.524317 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:45.525869 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:45.526446 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:48.581994 | orchestrator | 2025-02-10 09:36:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:48.582254 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:48.582791 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:48.585006 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task 
5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:48.586522 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:48.587964 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:51.627928 | orchestrator | 2025-02-10 09:36:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:51.628180 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:51.629305 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:51.629333 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:51.631099 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:51.632442 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:54.680762 | orchestrator | 2025-02-10 09:36:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:54.680932 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:54.682575 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:54.684724 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:54.687134 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:54.688670 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:36:57.736253 | orchestrator | 2025-02-10 09:36:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:57.736440 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:36:57.737797 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:36:57.743324 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:36:57.744298 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:36:57.744330 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:00.773322 | orchestrator | 2025-02-10 09:36:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:00.773473 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:00.773721 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:00.773756 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:00.774513 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:37:00.776334 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:03.818979 | orchestrator | 2025-02-10 
09:37:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:03.819180 | orchestrator | 2025-02-10 09:37:03 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:03.823588 | orchestrator | 2025-02-10 09:37:03 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:03.827520 | orchestrator | 2025-02-10 09:37:03 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:03.827655 | orchestrator | 2025-02-10 09:37:03 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state STARTED 2025-02-10 09:37:03.830321 | orchestrator | 2025-02-10 09:37:03 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:06.869693 | orchestrator | 2025-02-10 09:37:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:06.869838 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:06.871435 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:06.873161 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:06.874373 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:06.876566 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task 3ac52321-fc6f-48a6-846d-cc3532c274e5 is in state SUCCESS 2025-02-10 09:37:06.877896 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:09.919966 | orchestrator | 2025-02-10 09:37:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:09.920299 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:09.921106 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:09.921154 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:09.921233 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:09.921260 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:12.967337 | orchestrator | 2025-02-10 09:37:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:12.967607 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:12.968768 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:12.968826 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:12.968852 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:12.970095 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:16.030008 | orchestrator | 2025-02-10 09:37:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:16.030416 | orchestrator | 2025-02-10 09:37:16 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:16.031486 | orchestrator | 2025-02-10 
09:37:16 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:16.031527 | orchestrator | 2025-02-10 09:37:16 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:16.036183 | orchestrator | 2025-02-10 09:37:16 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:16.038469 | orchestrator | 2025-02-10 09:37:16 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:19.112593 | orchestrator | 2025-02-10 09:37:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:19.112757 | orchestrator | 2025-02-10 09:37:19 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:19.115922 | orchestrator | 2025-02-10 09:37:19 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:19.116000 | orchestrator | 2025-02-10 09:37:19 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:19.116344 | orchestrator | 2025-02-10 09:37:19 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:19.116393 | orchestrator | 2025-02-10 09:37:19 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:22.167559 | orchestrator | 2025-02-10 09:37:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:22.167713 | orchestrator | 2025-02-10 09:37:22 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:22.172094 | orchestrator | 2025-02-10 09:37:22 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:22.172196 | orchestrator | 2025-02-10 09:37:22 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:22.176235 | orchestrator | 2025-02-10 09:37:22 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:22.177208 | orchestrator | 2025-02-10 09:37:22 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:25.238636 | orchestrator | 2025-02-10 09:37:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:25.238815 | orchestrator | 2025-02-10 09:37:25 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:25.241465 | orchestrator | 2025-02-10 09:37:25 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:25.242503 | orchestrator | 2025-02-10 09:37:25 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:25.242542 | orchestrator | 2025-02-10 09:37:25 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:25.243740 | orchestrator | 2025-02-10 09:37:25 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:28.296686 | orchestrator | 2025-02-10 09:37:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:28.296843 | orchestrator | 2025-02-10 09:37:28 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:28.298822 | orchestrator | 2025-02-10 09:37:28 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:28.298874 | orchestrator | 2025-02-10 09:37:28 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:28.305920 | orchestrator | 2025-02-10 09:37:28 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:28.308886 | 
orchestrator | 2025-02-10 09:37:28 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:31.347351 | orchestrator | 2025-02-10 09:37:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:31.347488 | orchestrator | 2025-02-10 09:37:31 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:31.348474 | orchestrator | 2025-02-10 09:37:31 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:31.348907 | orchestrator | 2025-02-10 09:37:31 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:31.350538 | orchestrator | 2025-02-10 09:37:31 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:31.352172 | orchestrator | 2025-02-10 09:37:31 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:31.352340 | orchestrator | 2025-02-10 09:37:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:34.380257 | orchestrator | 2025-02-10 09:37:34 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:34.381311 | orchestrator | 2025-02-10 09:37:34 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:34.382172 | orchestrator | 2025-02-10 09:37:34 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:34.382215 | orchestrator | 2025-02-10 09:37:34 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:34.382896 | orchestrator | 2025-02-10 09:37:34 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:37.417922 | orchestrator | 2025-02-10 09:37:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:37.418190 | orchestrator | 2025-02-10 09:37:37 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:37.418311 | orchestrator | 2025-02-10 09:37:37 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:37.418333 | orchestrator | 2025-02-10 09:37:37 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:37.419495 | orchestrator | 2025-02-10 09:37:37 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:37.420350 | orchestrator | 2025-02-10 09:37:37 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:40.473919 | orchestrator | 2025-02-10 09:37:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:40.474254 | orchestrator | 2025-02-10 09:37:40 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state STARTED 2025-02-10 09:37:40.476440 | orchestrator | 2025-02-10 09:37:40 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:40.476509 | orchestrator | 2025-02-10 09:37:40 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:40.478717 | orchestrator | 2025-02-10 09:37:40 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:40.482897 | orchestrator | 2025-02-10 09:37:40 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:43.530315 | orchestrator | 2025-02-10 09:37:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:43.530502 | orchestrator | 2025-02-10 09:37:43.530525 | orchestrator | 2025-02-10 09:37:43.530542 | orchestrator | PLAY [Apply role cephclient] 
*************************************************** 2025-02-10 09:37:43.530558 | orchestrator | 2025-02-10 09:37:43.530573 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-02-10 09:37:43.530588 | orchestrator | Monday 10 February 2025 09:36:11 +0000 (0:00:00.153) 0:00:00.153 ******* 2025-02-10 09:37:43.530602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-02-10 09:37:43.530618 | orchestrator | 2025-02-10 09:37:43.530633 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-02-10 09:37:43.530647 | orchestrator | Monday 10 February 2025 09:36:12 +0000 (0:00:00.223) 0:00:00.377 ******* 2025-02-10 09:37:43.530662 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-02-10 09:37:43.530676 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-02-10 09:37:43.530691 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-02-10 09:37:43.530705 | orchestrator | 2025-02-10 09:37:43.530719 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-02-10 09:37:43.530734 | orchestrator | Monday 10 February 2025 09:36:13 +0000 (0:00:01.218) 0:00:01.595 ******* 2025-02-10 09:37:43.530748 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-02-10 09:37:43.530762 | orchestrator | 2025-02-10 09:37:43.530776 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-02-10 09:37:43.530790 | orchestrator | Monday 10 February 2025 09:36:14 +0000 (0:00:01.277) 0:00:02.873 ******* 2025-02-10 09:37:43.530804 | orchestrator | changed: [testbed-manager] 2025-02-10 09:37:43.530819 | orchestrator | 2025-02-10 09:37:43.530833 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-02-10 09:37:43.530849 | orchestrator | Monday 10 February 2025 09:36:15 +0000 (0:00:00.872) 0:00:03.746 ******* 2025-02-10 09:37:43.530864 | orchestrator | changed: [testbed-manager] 2025-02-10 09:37:43.530880 | orchestrator | 2025-02-10 09:37:43.530901 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-02-10 09:37:43.530917 | orchestrator | Monday 10 February 2025 09:36:16 +0000 (0:00:00.981) 0:00:04.727 ******* 2025-02-10 09:37:43.530932 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
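The "FAILED - RETRYING: … Manage cephclient service (10 retries left)" line above reflects the role bringing up the compose project it just copied to /opt/cephclient and re-checking until the service is running. A rough sketch of that bring-up-and-retry pattern, assuming the Docker CLI and a container named cephclient (both assumptions; the role itself does this through Ansible modules, not this script):

```python
# Rough sketch of the retry behaviour seen above: start the compose project in
# /opt/cephclient, check the container state, and retry a bounded number of
# times. Illustrative only; path, container name and retry counts are assumed.
import subprocess
import time

def manage_cephclient_service(retries=10, delay=5):
    for attempt in range(1, retries + 1):
        up = subprocess.run(["docker", "compose", "up", "-d"], cwd="/opt/cephclient")
        state = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Status}}", "cephclient"],
            capture_output=True, text=True,
        )
        if up.returncode == 0 and state.stdout.strip() == "running":
            return True
        print(f"Manage cephclient service ({retries - attempt} retries left)")
        time.sleep(delay)
    return False
```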
2025-02-10 09:37:43.530948 | orchestrator | ok: [testbed-manager] 2025-02-10 09:37:43.530965 | orchestrator | 2025-02-10 09:37:43.530980 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-02-10 09:37:43.530995 | orchestrator | Monday 10 February 2025 09:36:53 +0000 (0:00:37.293) 0:00:42.021 ******* 2025-02-10 09:37:43.531010 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-02-10 09:37:43.531046 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-02-10 09:37:43.531063 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-02-10 09:37:43.531103 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-02-10 09:37:43.531119 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-02-10 09:37:43.531134 | orchestrator | 2025-02-10 09:37:43.531150 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-02-10 09:37:43.531166 | orchestrator | Monday 10 February 2025 09:36:57 +0000 (0:00:04.166) 0:00:46.187 ******* 2025-02-10 09:37:43.531182 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-02-10 09:37:43.531197 | orchestrator | 2025-02-10 09:37:43.531211 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-02-10 09:37:43.531225 | orchestrator | Monday 10 February 2025 09:36:58 +0000 (0:00:00.497) 0:00:46.684 ******* 2025-02-10 09:37:43.531239 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:37:43.531252 | orchestrator | 2025-02-10 09:37:43.531266 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-02-10 09:37:43.531279 | orchestrator | Monday 10 February 2025 09:36:58 +0000 (0:00:00.124) 0:00:46.809 ******* 2025-02-10 09:37:43.531293 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:37:43.531307 | orchestrator | 2025-02-10 09:37:43.531321 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-02-10 09:37:43.531514 | orchestrator | Monday 10 February 2025 09:36:58 +0000 (0:00:00.279) 0:00:47.088 ******* 2025-02-10 09:37:43.531531 | orchestrator | changed: [testbed-manager] 2025-02-10 09:37:43.531545 | orchestrator | 2025-02-10 09:37:43.531558 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-02-10 09:37:43.531572 | orchestrator | Monday 10 February 2025 09:37:01 +0000 (0:00:02.690) 0:00:49.779 ******* 2025-02-10 09:37:43.531586 | orchestrator | changed: [testbed-manager] 2025-02-10 09:37:43.531600 | orchestrator | 2025-02-10 09:37:43.531613 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-02-10 09:37:43.531627 | orchestrator | Monday 10 February 2025 09:37:02 +0000 (0:00:00.826) 0:00:50.605 ******* 2025-02-10 09:37:43.531641 | orchestrator | changed: [testbed-manager] 2025-02-10 09:37:43.531654 | orchestrator | 2025-02-10 09:37:43.531668 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-02-10 09:37:43.531682 | orchestrator | Monday 10 February 2025 09:37:02 +0000 (0:00:00.531) 0:00:51.137 ******* 2025-02-10 09:37:43.531696 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-02-10 09:37:43.531710 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-02-10 09:37:43.531723 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-02-10 09:37:43.531737 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-02-10 09:37:43.531751 | orchestrator | 2025-02-10 09:37:43.531765 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:37:43.531779 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:37:43.531796 | orchestrator | 2025-02-10 09:37:43.531821 | orchestrator | Monday 10 February 2025 09:37:04 +0000 (0:00:01.496) 0:00:52.634 ******* 2025-02-10 09:37:43.532355 | orchestrator | =============================================================================== 2025-02-10 09:37:43.532381 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.29s 2025-02-10 09:37:43.532393 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.17s 2025-02-10 09:37:43.532406 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.69s 2025-02-10 09:37:43.532418 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.50s 2025-02-10 09:37:43.532430 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.28s 2025-02-10 09:37:43.532442 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2025-02-10 09:37:43.532455 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s 2025-02-10 09:37:43.532467 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.87s 2025-02-10 09:37:43.532492 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.83s 2025-02-10 09:37:43.532504 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s 2025-02-10 09:37:43.532523 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2025-02-10 09:37:43.532536 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s 2025-02-10 09:37:43.532548 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-02-10 09:37:43.532561 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-02-10 09:37:43.532573 | orchestrator | 2025-02-10 09:37:43.532585 | orchestrator | 2025-02-10 09:37:43 | INFO  | Task d518bc68-5da0-4b67-b72f-6fa0955df179 is in state SUCCESS 2025-02-10 09:37:43.532597 | orchestrator | 2025-02-10 09:37:43 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:43.532610 | orchestrator | 2025-02-10 09:37:43 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:43.532622 | orchestrator | 2025-02-10 09:37:43 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:43.532641 | orchestrator | 2025-02-10 09:37:43 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:46.577586 | orchestrator | 2025-02-10 09:37:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:46.577749 | orchestrator | 2025-02-10 09:37:46 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:46.578803 | orchestrator | 2025-02-10 09:37:46 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:46.578854 | orchestrator | 2025-02-10 09:37:46 | INFO  | Task 
5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:46.581562 | orchestrator | 2025-02-10 09:37:46 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:49.620388 | orchestrator | 2025-02-10 09:37:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:49.620577 | orchestrator | 2025-02-10 09:37:49 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:49.620927 | orchestrator | 2025-02-10 09:37:49 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:49.620964 | orchestrator | 2025-02-10 09:37:49 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:49.622735 | orchestrator | 2025-02-10 09:37:49 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:52.661509 | orchestrator | 2025-02-10 09:37:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:52.661668 | orchestrator | 2025-02-10 09:37:52 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:52.661960 | orchestrator | 2025-02-10 09:37:52 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:52.663580 | orchestrator | 2025-02-10 09:37:52 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:52.666143 | orchestrator | 2025-02-10 09:37:52 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:55.715537 | orchestrator | 2025-02-10 09:37:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:55.715730 | orchestrator | 2025-02-10 09:37:55 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:55.718537 | orchestrator | 2025-02-10 09:37:55 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:55.718699 | orchestrator | 2025-02-10 09:37:55 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:55.721318 | orchestrator | 2025-02-10 09:37:55 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:58.763507 | orchestrator | 2025-02-10 09:37:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:58.763672 | orchestrator | 2025-02-10 09:37:58 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:37:58.764295 | orchestrator | 2025-02-10 09:37:58 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:37:58.764325 | orchestrator | 2025-02-10 09:37:58 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:37:58.765809 | orchestrator | 2025-02-10 09:37:58 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:37:58.766812 | orchestrator | 2025-02-10 09:37:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:01.806704 | orchestrator | 2025-02-10 09:38:01 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:01.807797 | orchestrator | 2025-02-10 09:38:01 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:01.807845 | orchestrator | 2025-02-10 09:38:01 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:01.808580 | orchestrator | 2025-02-10 09:38:01 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:04.851802 | orchestrator | 2025-02-10 09:38:01 | INFO  | Wait 1 
second(s) until the next check 2025-02-10 09:38:04.851972 | orchestrator | 2025-02-10 09:38:04 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:04.853248 | orchestrator | 2025-02-10 09:38:04 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:04.853291 | orchestrator | 2025-02-10 09:38:04 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:04.856346 | orchestrator | 2025-02-10 09:38:04 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:07.905846 | orchestrator | 2025-02-10 09:38:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:07.906000 | orchestrator | 2025-02-10 09:38:07 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:07.906238 | orchestrator | 2025-02-10 09:38:07 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:07.906272 | orchestrator | 2025-02-10 09:38:07 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:07.906966 | orchestrator | 2025-02-10 09:38:07 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:10.935554 | orchestrator | 2025-02-10 09:38:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:10.935690 | orchestrator | 2025-02-10 09:38:10 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:13.966957 | orchestrator | 2025-02-10 09:38:10 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:13.967195 | orchestrator | 2025-02-10 09:38:10 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:13.967221 | orchestrator | 2025-02-10 09:38:10 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:13.967237 | orchestrator | 2025-02-10 09:38:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:13.967272 | orchestrator | 2025-02-10 09:38:13 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:13.968215 | orchestrator | 2025-02-10 09:38:13 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:13.968253 | orchestrator | 2025-02-10 09:38:13 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:13.969613 | orchestrator | 2025-02-10 09:38:13 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:13.972071 | orchestrator | 2025-02-10 09:38:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:17.022392 | orchestrator | 2025-02-10 09:38:17 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:17.022605 | orchestrator | 2025-02-10 09:38:17 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:17.023557 | orchestrator | 2025-02-10 09:38:17 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:17.024400 | orchestrator | 2025-02-10 09:38:17 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:20.085608 | orchestrator | 2025-02-10 09:38:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:20.085838 | orchestrator | 2025-02-10 09:38:20 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:20.086192 | orchestrator | 2025-02-10 09:38:20 | INFO  | Task 
89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:20.086218 | orchestrator | 2025-02-10 09:38:20 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:20.086234 | orchestrator | 2025-02-10 09:38:20 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:23.125389 | orchestrator | 2025-02-10 09:38:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:23.125549 | orchestrator | 2025-02-10 09:38:23 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:23.126193 | orchestrator | 2025-02-10 09:38:23 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:23.126220 | orchestrator | 2025-02-10 09:38:23 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:23.127417 | orchestrator | 2025-02-10 09:38:23 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:26.155887 | orchestrator | 2025-02-10 09:38:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:26.156099 | orchestrator | 2025-02-10 09:38:26 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:26.156520 | orchestrator | 2025-02-10 09:38:26 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:26.156558 | orchestrator | 2025-02-10 09:38:26 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:26.157331 | orchestrator | 2025-02-10 09:38:26 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:29.187787 | orchestrator | 2025-02-10 09:38:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:29.187948 | orchestrator | 2025-02-10 09:38:29 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:29.188485 | orchestrator | 2025-02-10 09:38:29 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:29.188527 | orchestrator | 2025-02-10 09:38:29 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:29.189444 | orchestrator | 2025-02-10 09:38:29 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:32.230731 | orchestrator | 2025-02-10 09:38:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:32.230886 | orchestrator | 2025-02-10 09:38:32 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:32.233705 | orchestrator | 2025-02-10 09:38:32 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:32.234716 | orchestrator | 2025-02-10 09:38:32 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:32.235878 | orchestrator | 2025-02-10 09:38:32 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:32.236138 | orchestrator | 2025-02-10 09:38:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:35.288661 | orchestrator | 2025-02-10 09:38:35 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:35.289186 | orchestrator | 2025-02-10 09:38:35 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:35.290638 | orchestrator | 2025-02-10 09:38:35 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:35.291734 | orchestrator | 2025-02-10 09:38:35 | INFO  | Task 
0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:38.338829 | orchestrator | 2025-02-10 09:38:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:38.338971 | orchestrator | 2025-02-10 09:38:38 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:38.339504 | orchestrator | 2025-02-10 09:38:38 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:38.339566 | orchestrator | 2025-02-10 09:38:38 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:38.340056 | orchestrator | 2025-02-10 09:38:38 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:38.340170 | orchestrator | 2025-02-10 09:38:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:41.390690 | orchestrator | 2025-02-10 09:38:41 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:41.393183 | orchestrator | 2025-02-10 09:38:41 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:41.393304 | orchestrator | 2025-02-10 09:38:41 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:41.394413 | orchestrator | 2025-02-10 09:38:41 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:44.426368 | orchestrator | 2025-02-10 09:38:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:44.426508 | orchestrator | 2025-02-10 09:38:44 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:44.426947 | orchestrator | 2025-02-10 09:38:44 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:44.428777 | orchestrator | 2025-02-10 09:38:44 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:44.428814 | orchestrator | 2025-02-10 09:38:44 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:47.478637 | orchestrator | 2025-02-10 09:38:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:47.478807 | orchestrator | 2025-02-10 09:38:47 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:47.479141 | orchestrator | 2025-02-10 09:38:47 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:47.479254 | orchestrator | 2025-02-10 09:38:47 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:47.480079 | orchestrator | 2025-02-10 09:38:47 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:50.522756 | orchestrator | 2025-02-10 09:38:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:50.522911 | orchestrator | 2025-02-10 09:38:50 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:50.523285 | orchestrator | 2025-02-10 09:38:50 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:50.524003 | orchestrator | 2025-02-10 09:38:50 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:50.525131 | orchestrator | 2025-02-10 09:38:50 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:53.574571 | orchestrator | 2025-02-10 09:38:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:53.574703 | orchestrator | 2025-02-10 09:38:53 | INFO  | Task 
95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:56.643541 | orchestrator | 2025-02-10 09:38:53 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:56.643681 | orchestrator | 2025-02-10 09:38:53 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state STARTED 2025-02-10 09:38:56.643702 | orchestrator | 2025-02-10 09:38:53 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:56.643719 | orchestrator | 2025-02-10 09:38:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:56.643758 | orchestrator | 2025-02-10 09:38:56 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:56.645862 | orchestrator | 2025-02-10 09:38:56 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:56.645903 | orchestrator | 2025-02-10 09:38:56 | INFO  | Task 5a120d4a-388c-4ae4-8704-51aaa627c3bb is in state SUCCESS 2025-02-10 09:38:56.645928 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:38:56.645945 | orchestrator | 2025-02-10 09:38:56.645961 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-02-10 09:38:56.645976 | orchestrator | 2025-02-10 09:38:56.646199 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-02-10 09:38:56.646228 | orchestrator | Monday 10 February 2025 09:37:08 +0000 (0:00:00.567) 0:00:00.567 ******* 2025-02-10 09:38:56.646244 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.646730 | orchestrator | 2025-02-10 09:38:56.646760 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-02-10 09:38:56.646777 | orchestrator | Monday 10 February 2025 09:37:10 +0000 (0:00:02.416) 0:00:02.984 ******* 2025-02-10 09:38:56.646792 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.646829 | orchestrator | 2025-02-10 09:38:56.646845 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-02-10 09:38:56.646859 | orchestrator | Monday 10 February 2025 09:37:11 +0000 (0:00:01.181) 0:00:04.165 ******* 2025-02-10 09:38:56.646874 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.646888 | orchestrator | 2025-02-10 09:38:56.646903 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-02-10 09:38:56.646917 | orchestrator | Monday 10 February 2025 09:37:13 +0000 (0:00:01.461) 0:00:05.627 ******* 2025-02-10 09:38:56.646931 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.646945 | orchestrator | 2025-02-10 09:38:56.646959 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-02-10 09:38:56.646972 | orchestrator | Monday 10 February 2025 09:37:14 +0000 (0:00:01.258) 0:00:06.886 ******* 2025-02-10 09:38:56.646986 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.647000 | orchestrator | 2025-02-10 09:38:56.647014 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-02-10 09:38:56.647077 | orchestrator | Monday 10 February 2025 09:37:15 +0000 (0:00:01.230) 0:00:08.116 ******* 2025-02-10 09:38:56.647093 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.647107 | orchestrator | 2025-02-10 09:38:56.647121 | orchestrator | TASK [Enable the ceph dashboard] 
*********************************************** 2025-02-10 09:38:56.647134 | orchestrator | Monday 10 February 2025 09:37:17 +0000 (0:00:01.747) 0:00:09.864 ******* 2025-02-10 09:38:56.647148 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.647162 | orchestrator | 2025-02-10 09:38:56.647176 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-02-10 09:38:56.647190 | orchestrator | Monday 10 February 2025 09:37:19 +0000 (0:00:01.905) 0:00:11.769 ******* 2025-02-10 09:38:56.647204 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.647217 | orchestrator | 2025-02-10 09:38:56.647231 | orchestrator | TASK [Create admin user] ******************************************************* 2025-02-10 09:38:56.647245 | orchestrator | Monday 10 February 2025 09:37:20 +0000 (0:00:01.189) 0:00:12.959 ******* 2025-02-10 09:38:56.647258 | orchestrator | changed: [testbed-manager] 2025-02-10 09:38:56.647272 | orchestrator | 2025-02-10 09:38:56.647286 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-02-10 09:38:56.647300 | orchestrator | Monday 10 February 2025 09:37:35 +0000 (0:00:15.153) 0:00:28.113 ******* 2025-02-10 09:38:56.647314 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:38:56.647328 | orchestrator | 2025-02-10 09:38:56.647341 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-10 09:38:56.647355 | orchestrator | 2025-02-10 09:38:56.647369 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-10 09:38:56.647383 | orchestrator | Monday 10 February 2025 09:37:36 +0000 (0:00:00.925) 0:00:29.038 ******* 2025-02-10 09:38:56.647396 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:56.647410 | orchestrator | 2025-02-10 09:38:56.647424 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-10 09:38:56.647438 | orchestrator | 2025-02-10 09:38:56.647451 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-10 09:38:56.647465 | orchestrator | Monday 10 February 2025 09:37:38 +0000 (0:00:02.114) 0:00:31.152 ******* 2025-02-10 09:38:56.647479 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:56.647492 | orchestrator | 2025-02-10 09:38:56.647515 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-10 09:38:56.647529 | orchestrator | 2025-02-10 09:38:56.647543 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-10 09:38:56.647557 | orchestrator | Monday 10 February 2025 09:37:40 +0000 (0:00:01.732) 0:00:32.884 ******* 2025-02-10 09:38:56.647571 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:56.647584 | orchestrator | 2025-02-10 09:38:56.647598 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:38:56.647613 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:38:56.647630 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:38:56.647644 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:38:56.647658 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-02-10 09:38:56.647672 | orchestrator | 2025-02-10 09:38:56.647685 | orchestrator | 2025-02-10 09:38:56.647699 | orchestrator | 2025-02-10 09:38:56.647713 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:38:56.647727 | orchestrator | Monday 10 February 2025 09:37:42 +0000 (0:00:01.772) 0:00:34.656 ******* 2025-02-10 09:38:56.647749 | orchestrator | =============================================================================== 2025-02-10 09:38:56.647763 | orchestrator | Create admin user ------------------------------------------------------ 15.15s 2025-02-10 09:38:56.647825 | orchestrator | Restart ceph manager service -------------------------------------------- 5.62s 2025-02-10 09:38:56.647842 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.42s 2025-02-10 09:38:56.647856 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.91s 2025-02-10 09:38:56.647870 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.75s 2025-02-10 09:38:56.647884 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.46s 2025-02-10 09:38:56.647898 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.26s 2025-02-10 09:38:56.647912 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.23s 2025-02-10 09:38:56.647925 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2025-02-10 09:38:56.647939 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.18s 2025-02-10 09:38:56.647953 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.93s 2025-02-10 09:38:56.647966 | orchestrator | 2025-02-10 09:38:56.647980 | orchestrator | 2025-02-10 09:38:56.647994 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:38:56.648007 | orchestrator | 2025-02-10 09:38:56.648021 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:38:56.648035 | orchestrator | Monday 10 February 2025 09:36:18 +0000 (0:00:00.332) 0:00:00.332 ******* 2025-02-10 09:38:56.648077 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:38:56.648092 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:38:56.648106 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:38:56.648120 | orchestrator | 2025-02-10 09:38:56.648133 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:38:56.648147 | orchestrator | Monday 10 February 2025 09:36:19 +0000 (0:00:00.530) 0:00:00.862 ******* 2025-02-10 09:38:56.648161 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-02-10 09:38:56.648175 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-02-10 09:38:56.648188 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-02-10 09:38:56.648202 | orchestrator | 2025-02-10 09:38:56.648216 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-02-10 09:38:56.648229 | orchestrator | 2025-02-10 09:38:56.648243 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-10 09:38:56.648257 | orchestrator | Monday 10 
February 2025 09:36:19 +0000 (0:00:00.365) 0:00:01.227 ******* 2025-02-10 09:38:56.648271 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:38:56.648287 | orchestrator | 2025-02-10 09:38:56.648300 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-02-10 09:38:56.648314 | orchestrator | Monday 10 February 2025 09:36:20 +0000 (0:00:01.328) 0:00:02.556 ******* 2025-02-10 09:38:56.648328 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-02-10 09:38:56.648341 | orchestrator | 2025-02-10 09:38:56.648355 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-02-10 09:38:56.648369 | orchestrator | Monday 10 February 2025 09:36:25 +0000 (0:00:04.343) 0:00:06.899 ******* 2025-02-10 09:38:56.648382 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-02-10 09:38:56.648396 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-02-10 09:38:56.648410 | orchestrator | 2025-02-10 09:38:56.648424 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-02-10 09:38:56.648444 | orchestrator | Monday 10 February 2025 09:36:32 +0000 (0:00:07.345) 0:00:14.244 ******* 2025-02-10 09:38:56.648458 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:38:56.648479 | orchestrator | 2025-02-10 09:38:56.648492 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-02-10 09:38:56.648506 | orchestrator | Monday 10 February 2025 09:36:35 +0000 (0:00:03.201) 0:00:17.446 ******* 2025-02-10 09:38:56.648520 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:38:56.648534 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-02-10 09:38:56.648547 | orchestrator | 2025-02-10 09:38:56.648561 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-02-10 09:38:56.648574 | orchestrator | Monday 10 February 2025 09:36:39 +0000 (0:00:03.553) 0:00:20.999 ******* 2025-02-10 09:38:56.648588 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:38:56.648602 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-02-10 09:38:56.648615 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-02-10 09:38:56.648629 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-02-10 09:38:56.648643 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-02-10 09:38:56.648657 | orchestrator | 2025-02-10 09:38:56.648671 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-02-10 09:38:56.648684 | orchestrator | Monday 10 February 2025 09:36:56 +0000 (0:00:17.232) 0:00:38.232 ******* 2025-02-10 09:38:56.648698 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-02-10 09:38:56.648712 | orchestrator | 2025-02-10 09:38:56.648726 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-02-10 09:38:56.648746 | orchestrator | Monday 10 February 2025 09:37:00 +0000 (0:00:04.057) 0:00:42.289 ******* 2025-02-10 09:38:56.648772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.648797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.648812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.648835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.648852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.648875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.648891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.648905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.648928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.648942 | orchestrator | 2025-02-10 09:38:56.648956 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-02-10 09:38:56.648970 | orchestrator | Monday 10 February 2025 09:37:04 +0000 (0:00:03.563) 0:00:45.852 ******* 2025-02-10 09:38:56.648984 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-02-10 09:38:56.648998 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-02-10 09:38:56.649011 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-02-10 09:38:56.649025 | orchestrator | 2025-02-10 09:38:56.649093 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-02-10 09:38:56.649109 | orchestrator | Monday 10 February 2025 09:37:06 +0000 (0:00:02.061) 0:00:47.914 ******* 2025-02-10 09:38:56.649124 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:56.649138 | orchestrator | 2025-02-10 09:38:56.649152 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-02-10 09:38:56.649166 | orchestrator | Monday 10 February 2025 09:37:06 +0000 (0:00:00.122) 0:00:48.036 ******* 2025-02-10 09:38:56.649179 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:56.649193 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:56.649207 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:56.649221 | orchestrator | 2025-02-10 09:38:56.649235 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-10 09:38:56.649248 | orchestrator | Monday 10 February 2025 09:37:06 +0000 (0:00:00.365) 0:00:48.401 ******* 2025-02-10 09:38:56.649268 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:38:56.649283 | orchestrator | 2025-02-10 09:38:56.649296 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-02-10 09:38:56.649310 | orchestrator | Monday 10 February 2025 09:37:07 +0000 (0:00:00.803) 0:00:49.205 ******* 2025-02-10 09:38:56.649335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.649352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.649376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.649393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.649414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.649427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': 
''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.649440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.649460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.649474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.649486 | orchestrator | 2025-02-10 09:38:56.649499 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-02-10 09:38:56.649512 | orchestrator | Monday 10 February 2025 09:37:13 +0000 (0:00:06.037) 0:00:55.243 ******* 2025-02-10 09:38:56.649525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.649546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649579 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:56.649592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.649606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649619 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649632 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:56.649656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.649684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649724 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:56.649737 | orchestrator | 2025-02-10 09:38:56.649749 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-02-10 09:38:56.649762 | orchestrator | Monday 10 February 2025 09:37:17 +0000 (0:00:03.809) 0:00:59.052 ******* 2025-02-10 09:38:56.649775 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.649788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649830 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:56.649842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.649856 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.649894 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:56.649914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.649945 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:56.649958 | orchestrator | 2025-02-10 09:38:56.649970 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-02-10 09:38:56.649982 | orchestrator | Monday 10 February 2025 09:37:19 +0000 (0:00:02.390) 0:01:01.442 ******* 2025-02-10 09:38:56.649995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650186 | orchestrator | 2025-02-10 09:38:56.650203 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-02-10 09:38:56.650216 | orchestrator | Monday 10 February 2025 09:37:25 +0000 (0:00:05.937) 0:01:07.380 ******* 2025-02-10 09:38:56.650228 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:56.650241 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:56.650253 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:56.650265 | orchestrator | 2025-02-10 09:38:56.650277 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-02-10 09:38:56.650290 | orchestrator | Monday 10 February 2025 09:37:30 +0000 (0:00:04.752) 0:01:12.133 ******* 2025-02-10 09:38:56.650302 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:38:56.650314 | orchestrator | 2025-02-10 09:38:56.650326 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-02-10 09:38:56.650338 | orchestrator | Monday 10 February 2025 09:37:32 +0000 (0:00:02.099) 0:01:14.233 ******* 2025-02-10 09:38:56.650350 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:56.650362 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:56.650374 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:56.650386 | orchestrator | 2025-02-10 09:38:56.650399 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-02-10 09:38:56.650416 | orchestrator | Monday 10 February 2025 09:37:34 +0000 (0:00:02.224) 0:01:16.457 ******* 2025-02-10 09:38:56.650429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2025-02-10 09:38:56.650523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650581 | orchestrator | 2025-02-10 09:38:56.650594 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-02-10 09:38:56.650606 | orchestrator | Monday 10 February 2025 09:37:48 +0000 (0:00:13.810) 0:01:30.267 ******* 2025-02-10 09:38:56.650625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.650639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.650651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.650664 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:56.650677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.650697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.650716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.650729 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:56.650742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:38:56.650756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.650768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:56.650789 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:56.650801 | 
orchestrator | 2025-02-10 09:38:56.650813 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-02-10 09:38:56.650825 | orchestrator | Monday 10 February 2025 09:37:50 +0000 (0:00:01.789) 0:01:32.057 ******* 2025-02-10 09:38:56.650838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:56.650905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:56.650975 | orchestrator | 2025-02-10 09:38:56.650988 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-10 09:38:56.651000 | orchestrator | Monday 10 February 2025 09:37:56 +0000 (0:00:06.490) 0:01:38.548 ******* 2025-02-10 09:38:56.651018 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:56.651031 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:56.651096 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:56.651110 | orchestrator | 2025-02-10 09:38:56.651123 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-02-10 09:38:56.651135 | orchestrator | Monday 10 February 2025 09:37:58 +0000 (0:00:01.113) 0:01:39.661 ******* 2025-02-10 09:38:56.651148 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:56.651160 | orchestrator | 2025-02-10 09:38:56.651172 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-02-10 09:38:56.651185 | orchestrator | Monday 10 February 2025 09:38:01 +0000 (0:00:03.485) 0:01:43.146 ******* 2025-02-10 09:38:56.651197 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:56.651209 | orchestrator | 2025-02-10 09:38:56.651221 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-02-10 09:38:56.651234 | orchestrator | Monday 10 February 2025 09:38:04 +0000 (0:00:02.826) 0:01:45.972 ******* 2025-02-10 09:38:56.651246 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:56.651258 | orchestrator | 2025-02-10 09:38:56.651270 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-10 09:38:56.651282 | orchestrator | Monday 10 February 2025 09:38:16 +0000 (0:00:11.660) 0:01:57.633 ******* 2025-02-10 09:38:56.651295 | orchestrator | 2025-02-10 09:38:56.651306 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-10 09:38:56.651317 | orchestrator | Monday 10 February 2025 09:38:16 +0000 (0:00:00.175) 0:01:57.808 ******* 2025-02-10 09:38:56.651327 | orchestrator | 2025-02-10 09:38:56.651337 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-10 09:38:56.651347 | orchestrator | Monday 10 February 2025 09:38:16 +0000 (0:00:00.698) 0:01:58.507 ******* 2025-02-10 09:38:56.651356 | orchestrator | 2025-02-10 09:38:56.651366 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-02-10 09:38:56.651376 | orchestrator | Monday 10 February 2025 09:38:17 +0000 (0:00:00.225) 0:01:58.733 ******* 2025-02-10 09:38:56.651386 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:56.651396 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:56.651407 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:56.651417 | orchestrator | 2025-02-10 09:38:56.651427 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 
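The "Creating barbican database" and "Creating barbican database user and setting permissions" tasks above boil down to a handful of SQL statements issued once (here from testbed-node-0) against the MariaDB/Galera cluster. A minimal sketch of that step, assuming direct access to the database endpoint; the host and the root/barbican credentials below are placeholders, not values from this deployment:

    import pymysql

    # Placeholders only -- the real values come from the kolla secrets, not this log.
    DB_HOST = "api-int.testbed.osism.xyz"
    ROOT_PASSWORD = "CHANGEME_ROOT"
    BARBICAN_PASSWORD = "CHANGEME_BARBICAN"

    conn = pymysql.connect(host=DB_HOST, user="root", password=ROOT_PASSWORD)
    with conn.cursor() as cur:
        # "Creating barbican database"
        cur.execute("CREATE DATABASE IF NOT EXISTS barbican")
        # "Creating barbican database user and setting permissions"
        # (the host wildcard is written as %% because this statement is parametrized)
        cur.execute(
            "CREATE USER IF NOT EXISTS 'barbican'@'%%' IDENTIFIED BY %s",
            (BARBICAN_PASSWORD,),
        )
        cur.execute("GRANT ALL PRIVILEGES ON barbican.* TO 'barbican'@'%'")
    conn.commit()
    conn.close()
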
2025-02-10 09:38:56.651436 | orchestrator | Monday 10 February 2025 09:38:30 +0000 (0:00:13.484) 0:02:12.221 ******* 2025-02-10 09:38:56.651447 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:56.651457 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:56.651467 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:56.651477 | orchestrator | 2025-02-10 09:38:56.651487 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-02-10 09:38:56.651497 | orchestrator | Monday 10 February 2025 09:38:44 +0000 (0:00:13.443) 0:02:25.665 ******* 2025-02-10 09:38:56.651507 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:56.651517 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:56.651527 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:56.651543 | orchestrator | 2025-02-10 09:38:56.651558 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:38:59.708354 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:38:59.708497 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:38:59.708517 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:38:59.708532 | orchestrator | 2025-02-10 09:38:59.708655 | orchestrator | 2025-02-10 09:38:59.708673 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:38:59.708720 | orchestrator | Monday 10 February 2025 09:38:53 +0000 (0:00:09.845) 0:02:35.510 ******* 2025-02-10 09:38:59.708735 | orchestrator | =============================================================================== 2025-02-10 09:38:59.708749 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.23s 2025-02-10 09:38:59.708764 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.81s 2025-02-10 09:38:59.708778 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.48s 2025-02-10 09:38:59.708808 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.44s 2025-02-10 09:38:59.708822 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.66s 2025-02-10 09:38:59.708836 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.85s 2025-02-10 09:38:59.708850 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.34s 2025-02-10 09:38:59.708864 | orchestrator | barbican : Check barbican containers ------------------------------------ 6.49s 2025-02-10 09:38:59.708878 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 6.04s 2025-02-10 09:38:59.708891 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.94s 2025-02-10 09:38:59.708905 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.75s 2025-02-10 09:38:59.708919 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.34s 2025-02-10 09:38:59.708933 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.06s 2025-02-10 09:38:59.708947 | orchestrator | service-cert-copy : barbican | Copying over 
backend internal TLS certificate --- 3.81s 2025-02-10 09:38:59.708962 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.56s 2025-02-10 09:38:59.708976 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.55s 2025-02-10 09:38:59.708989 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.49s 2025-02-10 09:38:59.709003 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.20s 2025-02-10 09:38:59.709017 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.83s 2025-02-10 09:38:59.709031 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.39s 2025-02-10 09:38:59.709153 | orchestrator | 2025-02-10 09:38:56 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:38:59.709173 | orchestrator | 2025-02-10 09:38:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:59.709208 | orchestrator | 2025-02-10 09:38:59 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:38:59.714528 | orchestrator | 2025-02-10 09:38:59 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:38:59.714572 | orchestrator | 2025-02-10 09:38:59 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:38:59.719681 | orchestrator | 2025-02-10 09:38:59 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:02.780751 | orchestrator | 2025-02-10 09:38:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:02.780908 | orchestrator | 2025-02-10 09:39:02 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:02.781353 | orchestrator | 2025-02-10 09:39:02 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:02.783415 | orchestrator | 2025-02-10 09:39:02 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:02.783484 | orchestrator | 2025-02-10 09:39:02 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:05.828813 | orchestrator | 2025-02-10 09:39:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:05.829270 | orchestrator | 2025-02-10 09:39:05 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:05.829322 | orchestrator | 2025-02-10 09:39:05 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:05.830242 | orchestrator | 2025-02-10 09:39:05 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:05.833240 | orchestrator | 2025-02-10 09:39:05 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:08.883516 | orchestrator | 2025-02-10 09:39:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:08.883670 | orchestrator | 2025-02-10 09:39:08 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:08.883846 | orchestrator | 2025-02-10 09:39:08 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:08.885836 | orchestrator | 2025-02-10 09:39:08 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:08.887240 | orchestrator | 2025-02-10 09:39:08 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 
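The interleaved "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the deployment tooling on the orchestrator polling the background tasks that run the individual kolla-ansible plays: each task ID is checked roughly once per second until it leaves STARTED (the first one flips to SUCCESS at 09:40:13 below). A minimal sketch of that wait loop, assuming a get_task_state() lookup helper (a placeholder, not the actual osism API):

    import time

    def get_task_state(task_id):
        # Placeholder: in reality this would query the task backend
        # (e.g. the result store of the worker executing the play).
        raise NotImplementedError

    def wait_for_tasks(task_ids, poll_interval=1):
        """Poll all task IDs until every one has left the PENDING/STARTED states."""
        pending = set(task_ids)
        while pending:
            for task_id in list(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {poll_interval} second(s) until the next check")
                time.sleep(poll_interval)
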
2025-02-10 09:39:11.936627 | orchestrator | 2025-02-10 09:39:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:11.936794 | orchestrator | 2025-02-10 09:39:11 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:11.938808 | orchestrator | 2025-02-10 09:39:11 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:11.938889 | orchestrator | 2025-02-10 09:39:11 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:11.941491 | orchestrator | 2025-02-10 09:39:11 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:14.997442 | orchestrator | 2025-02-10 09:39:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:14.997743 | orchestrator | 2025-02-10 09:39:14 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:15.000276 | orchestrator | 2025-02-10 09:39:14 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:15.000349 | orchestrator | 2025-02-10 09:39:14 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:15.000702 | orchestrator | 2025-02-10 09:39:15 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:15.000737 | orchestrator | 2025-02-10 09:39:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:18.077114 | orchestrator | 2025-02-10 09:39:18 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:18.077710 | orchestrator | 2025-02-10 09:39:18 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:18.080398 | orchestrator | 2025-02-10 09:39:18 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:18.082538 | orchestrator | 2025-02-10 09:39:18 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:21.124123 | orchestrator | 2025-02-10 09:39:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:21.124289 | orchestrator | 2025-02-10 09:39:21 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:21.126347 | orchestrator | 2025-02-10 09:39:21 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:21.126381 | orchestrator | 2025-02-10 09:39:21 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:21.127098 | orchestrator | 2025-02-10 09:39:21 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:24.164391 | orchestrator | 2025-02-10 09:39:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:24.164550 | orchestrator | 2025-02-10 09:39:24 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:24.165280 | orchestrator | 2025-02-10 09:39:24 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:24.165320 | orchestrator | 2025-02-10 09:39:24 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:24.166275 | orchestrator | 2025-02-10 09:39:24 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:27.214312 | orchestrator | 2025-02-10 09:39:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:27.214470 | orchestrator | 2025-02-10 09:39:27 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:27.217447 
| orchestrator | 2025-02-10 09:39:27 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:27.217496 | orchestrator | 2025-02-10 09:39:27 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:27.218452 | orchestrator | 2025-02-10 09:39:27 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:30.256138 | orchestrator | 2025-02-10 09:39:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:30.256309 | orchestrator | 2025-02-10 09:39:30 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:30.257893 | orchestrator | 2025-02-10 09:39:30 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:30.261947 | orchestrator | 2025-02-10 09:39:30 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:30.263435 | orchestrator | 2025-02-10 09:39:30 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:33.315443 | orchestrator | 2025-02-10 09:39:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:33.315642 | orchestrator | 2025-02-10 09:39:33 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:33.328822 | orchestrator | 2025-02-10 09:39:33 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:33.328942 | orchestrator | 2025-02-10 09:39:33 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:36.383299 | orchestrator | 2025-02-10 09:39:33 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:36.383413 | orchestrator | 2025-02-10 09:39:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:36.383455 | orchestrator | 2025-02-10 09:39:36 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:36.383803 | orchestrator | 2025-02-10 09:39:36 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:36.383822 | orchestrator | 2025-02-10 09:39:36 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:36.384731 | orchestrator | 2025-02-10 09:39:36 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:39.438327 | orchestrator | 2025-02-10 09:39:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:39.438682 | orchestrator | 2025-02-10 09:39:39 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:39.439324 | orchestrator | 2025-02-10 09:39:39 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:39.439368 | orchestrator | 2025-02-10 09:39:39 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:39.440335 | orchestrator | 2025-02-10 09:39:39 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:39.440376 | orchestrator | 2025-02-10 09:39:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:42.489247 | orchestrator | 2025-02-10 09:39:42 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:42.489487 | orchestrator | 2025-02-10 09:39:42 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:42.489540 | orchestrator | 2025-02-10 09:39:42 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:42.490483 | 
orchestrator | 2025-02-10 09:39:42 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:42.492128 | orchestrator | 2025-02-10 09:39:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:45.524232 | orchestrator | 2025-02-10 09:39:45 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:45.526307 | orchestrator | 2025-02-10 09:39:45 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:45.526356 | orchestrator | 2025-02-10 09:39:45 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:45.527275 | orchestrator | 2025-02-10 09:39:45 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:48.569564 | orchestrator | 2025-02-10 09:39:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:48.569869 | orchestrator | 2025-02-10 09:39:48 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:48.570663 | orchestrator | 2025-02-10 09:39:48 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:48.570699 | orchestrator | 2025-02-10 09:39:48 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:48.570723 | orchestrator | 2025-02-10 09:39:48 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:51.616614 | orchestrator | 2025-02-10 09:39:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:51.616779 | orchestrator | 2025-02-10 09:39:51 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:51.619306 | orchestrator | 2025-02-10 09:39:51 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:51.620862 | orchestrator | 2025-02-10 09:39:51 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:51.623394 | orchestrator | 2025-02-10 09:39:51 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:54.701657 | orchestrator | 2025-02-10 09:39:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:54.701806 | orchestrator | 2025-02-10 09:39:54 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:54.703524 | orchestrator | 2025-02-10 09:39:54 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:54.703589 | orchestrator | 2025-02-10 09:39:54 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:54.704803 | orchestrator | 2025-02-10 09:39:54 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:39:57.747560 | orchestrator | 2025-02-10 09:39:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:57.747738 | orchestrator | 2025-02-10 09:39:57 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:39:57.749217 | orchestrator | 2025-02-10 09:39:57 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:39:57.749261 | orchestrator | 2025-02-10 09:39:57 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:39:57.749871 | orchestrator | 2025-02-10 09:39:57 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:00.815330 | orchestrator | 2025-02-10 09:39:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:00.815499 | orchestrator | 2025-02-10 
09:40:00 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:40:00.819570 | orchestrator | 2025-02-10 09:40:00 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:00.820315 | orchestrator | 2025-02-10 09:40:00 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:00.820477 | orchestrator | 2025-02-10 09:40:00 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:03.868679 | orchestrator | 2025-02-10 09:40:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:03.868852 | orchestrator | 2025-02-10 09:40:03 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:40:03.869269 | orchestrator | 2025-02-10 09:40:03 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:03.870087 | orchestrator | 2025-02-10 09:40:03 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:03.871635 | orchestrator | 2025-02-10 09:40:03 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:06.925484 | orchestrator | 2025-02-10 09:40:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:06.925695 | orchestrator | 2025-02-10 09:40:06 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:40:06.926563 | orchestrator | 2025-02-10 09:40:06 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:06.927663 | orchestrator | 2025-02-10 09:40:06 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:06.929481 | orchestrator | 2025-02-10 09:40:06 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:09.973326 | orchestrator | 2025-02-10 09:40:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:09.973527 | orchestrator | 2025-02-10 09:40:09 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state STARTED 2025-02-10 09:40:09.978597 | orchestrator | 2025-02-10 09:40:09 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:09.978836 | orchestrator | 2025-02-10 09:40:09 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:09.979763 | orchestrator | 2025-02-10 09:40:09 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:13.034266 | orchestrator | 2025-02-10 09:40:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:13.034486 | orchestrator | 2025-02-10 09:40:13 | INFO  | Task 95dd8275-2d79-4ff8-896b-287d4b1729d0 is in state SUCCESS 2025-02-10 09:40:13.036261 | orchestrator | 2025-02-10 09:40:13.036705 | orchestrator | 2025-02-10 09:40:13.036728 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:40:13.036743 | orchestrator | 2025-02-10 09:40:13.036758 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:40:13.036802 | orchestrator | Monday 10 February 2025 09:36:18 +0000 (0:00:00.377) 0:00:00.377 ******* 2025-02-10 09:40:13.036818 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:40:13.036834 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:40:13.036848 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:40:13.036862 | orchestrator | 2025-02-10 09:40:13.036876 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-02-10 09:40:13.036891 | orchestrator | Monday 10 February 2025 09:36:18 +0000 (0:00:00.459) 0:00:00.837 ******* 2025-02-10 09:40:13.036906 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-02-10 09:40:13.037378 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-02-10 09:40:13.037393 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-02-10 09:40:13.037407 | orchestrator | 2025-02-10 09:40:13.037421 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-02-10 09:40:13.037435 | orchestrator | 2025-02-10 09:40:13.037449 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-10 09:40:13.037463 | orchestrator | Monday 10 February 2025 09:36:19 +0000 (0:00:00.466) 0:00:01.303 ******* 2025-02-10 09:40:13.037477 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:40:13.037629 | orchestrator | 2025-02-10 09:40:13.037647 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-02-10 09:40:13.037662 | orchestrator | Monday 10 February 2025 09:36:20 +0000 (0:00:01.247) 0:00:02.551 ******* 2025-02-10 09:40:13.037678 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-02-10 09:40:13.037694 | orchestrator | 2025-02-10 09:40:13.037709 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-02-10 09:40:13.037724 | orchestrator | Monday 10 February 2025 09:36:24 +0000 (0:00:03.978) 0:00:06.529 ******* 2025-02-10 09:40:13.037738 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-02-10 09:40:13.037752 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-02-10 09:40:13.037767 | orchestrator | 2025-02-10 09:40:13.037781 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-02-10 09:40:13.037796 | orchestrator | Monday 10 February 2025 09:36:31 +0000 (0:00:07.444) 0:00:13.974 ******* 2025-02-10 09:40:13.037810 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-02-10 09:40:13.037825 | orchestrator | 2025-02-10 09:40:13.037840 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-02-10 09:40:13.037855 | orchestrator | Monday 10 February 2025 09:36:35 +0000 (0:00:03.453) 0:00:17.427 ******* 2025-02-10 09:40:13.038424 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:40:13.038444 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-02-10 09:40:13.038458 | orchestrator | 2025-02-10 09:40:13.038472 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-02-10 09:40:13.038486 | orchestrator | Monday 10 February 2025 09:36:39 +0000 (0:00:03.677) 0:00:21.105 ******* 2025-02-10 09:40:13.038499 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:40:13.038513 | orchestrator | 2025-02-10 09:40:13.038527 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-02-10 09:40:13.038541 | orchestrator | Monday 10 February 2025 09:36:42 +0000 (0:00:02.980) 0:00:24.085 ******* 2025-02-10 09:40:13.038555 | orchestrator | 
changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-02-10 09:40:13.038568 | orchestrator | 2025-02-10 09:40:13.038582 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-02-10 09:40:13.038596 | orchestrator | Monday 10 February 2025 09:36:46 +0000 (0:00:04.708) 0:00:28.794 ******* 2025-02-10 09:40:13.038613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.038694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.038713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.038729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.038745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.038760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.038784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.038870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.038960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.038981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.038998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.039247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.039286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.039349 | orchestrator | 2025-02-10 09:40:13.039363 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-02-10 09:40:13.039377 | orchestrator | Monday 10 February 2025 09:36:50 +0000 (0:00:03.680) 0:00:32.474 ******* 2025-02-10 09:40:13.039391 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:13.039405 | orchestrator | 2025-02-10 09:40:13.039419 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-02-10 09:40:13.039431 | orchestrator | Monday 10 February 2025 09:36:50 +0000 (0:00:00.152) 0:00:32.627 ******* 2025-02-10 09:40:13.039443 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:13.039456 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:13.039468 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:13.039480 | orchestrator | 2025-02-10 09:40:13.039492 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-10 09:40:13.039504 | orchestrator | Monday 10 February 2025 09:36:51 +0000 (0:00:00.503) 0:00:33.130 ******* 2025-02-10 09:40:13.039518 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:40:13.039530 | orchestrator | 2025-02-10 09:40:13.039542 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-02-10 09:40:13.039555 | orchestrator | Monday 10 February 2025 09:36:51 +0000 (0:00:00.744) 0:00:33.875 ******* 2025-02-10 09:40:13.039567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.039588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.039603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.039647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.039953 | orchestrator | 2025-02-10 09:40:13.039966 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-02-10 09:40:13.039979 | orchestrator | Monday 10 February 2025 09:36:58 +0000 (0:00:06.743) 0:00:40.619 ******* 2025-02-10 09:40:13.039992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.040013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.040026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040183 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:13.040206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.040219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.040232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040321 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:13.040335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.040348 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.040361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040450 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:13.040462 | orchestrator | 2025-02-10 09:40:13.040475 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-02-10 09:40:13.040487 | orchestrator | Monday 10 February 2025 
09:36:59 +0000 (0:00:01.003) 0:00:41.622 ******* 2025-02-10 09:40:13.040500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.040513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.040526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040631 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:13.040644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.040657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.040670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040744 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:13.040755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.040766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.040776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.040849 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:13.040859 | orchestrator | 2025-02-10 09:40:13.040869 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-02-10 09:40:13.040879 | orchestrator | Monday 10 February 2025 09:37:01 +0000 (0:00:01.462) 0:00:43.085 ******* 2025-02-10 09:40:13.040890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.040900 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.040911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.040949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.040961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.040972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.040983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.040993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041233 | orchestrator | 2025-02-10 09:40:13.041243 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-02-10 09:40:13.041253 | orchestrator | Monday 10 February 2025 09:37:09 +0000 (0:00:08.356) 0:00:51.446 ******* 2025-02-10 09:40:13.041264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.041302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.041314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.041325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041641 | orchestrator | 2025-02-10 09:40:13.041652 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-02-10 09:40:13.041662 | orchestrator | Monday 10 February 2025 09:37:42 +0000 (0:00:32.965) 0:01:24.412 ******* 2025-02-10 09:40:13.041672 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-10 09:40:13.041682 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-10 09:40:13.041693 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-10 09:40:13.041703 | orchestrator | 2025-02-10 09:40:13.041713 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-02-10 09:40:13.041723 | orchestrator | Monday 10 February 2025 09:37:55 +0000 (0:00:13.293) 0:01:37.705 ******* 2025-02-10 09:40:13.041733 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-10 09:40:13.041743 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-10 09:40:13.041752 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-10 09:40:13.041762 | orchestrator | 2025-02-10 09:40:13.041772 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-02-10 09:40:13.041787 | orchestrator | Monday 10 February 2025 09:38:02 +0000 (0:00:07.024) 0:01:44.729 ******* 2025-02-10 09:40:13.041797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.041813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
 2025-02-10 09:40:13.041833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.041848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2025-02-10 09:40:13.041902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.041971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.041998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042162 | orchestrator | 2025-02-10 09:40:13.042172 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-02-10 09:40:13.042182 | orchestrator | Monday 10 February 2025 09:38:09 +0000 (0:00:06.602) 0:01:51.332 ******* 2025-02-10 09:40:13.042192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.042213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.042224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.042241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2025-02-10 09:40:13.042287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.042473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042483 | orchestrator | 2025-02-10 09:40:13.042494 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-10 09:40:13.042504 | orchestrator | Monday 10 February 2025 09:38:13 +0000 (0:00:03.859) 0:01:55.191 ******* 2025-02-10 09:40:13.042520 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:13.042530 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:13.042540 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:13.042550 | orchestrator | 2025-02-10 09:40:13.042560 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-02-10 09:40:13.042570 | orchestrator | Monday 10 February 2025 09:38:13 +0000 (0:00:00.605) 0:01:55.797 ******* 2025-02-10 09:40:13.042580 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.042598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.042609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042646 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.042662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.042679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 
09:40:13.042716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042744 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:13.042754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042795 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:13.042806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:40:13.042816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:40:13.042831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.042899 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:13.042909 | orchestrator | 2025-02-10 09:40:13.042919 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-02-10 09:40:13.042929 | orchestrator | Monday 10 February 2025 09:38:15 +0000 (0:00:01.306) 0:01:57.104 ******* 2025-02-10 09:40:13.042940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.042955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:40:13.042978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2025-02-10 09:40:13.042989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.043241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.043251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:13.043280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:40:13.043292 | orchestrator | 2025-02-10 09:40:13.043302 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-10 09:40:13.043313 | orchestrator | Monday 10 February 2025 09:38:23 +0000 (0:00:08.682) 0:02:05.786 ******* 2025-02-10 09:40:13.043323 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:13.043333 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:13.043343 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:13.043353 | orchestrator | 2025-02-10 09:40:13.043363 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-02-10 09:40:13.043373 | orchestrator | Monday 10 February 2025 09:38:24 +0000 (0:00:01.247) 0:02:07.034 ******* 2025-02-10 09:40:13.043383 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-02-10 09:40:13.043394 | orchestrator | 2025-02-10 09:40:13.043404 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-02-10 09:40:13.043413 | orchestrator | Monday 10 February 2025 09:38:27 +0000 (0:00:02.577) 0:02:09.611 ******* 2025-02-10 09:40:13.043423 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:40:13.043433 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-02-10 09:40:13.043443 | orchestrator | 2025-02-10 09:40:13.043452 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-02-10 09:40:13.043460 | orchestrator | Monday 10 February 2025 09:38:30 +0000 (0:00:02.469) 0:02:12.081 ******* 2025-02-10 09:40:13.043468 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:13.043477 | orchestrator | 2025-02-10 09:40:13.043485 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-02-10 09:40:13.043494 | orchestrator | Monday 10 February 2025 09:38:48 +0000 (0:00:18.541) 0:02:30.622 ******* 2025-02-10 09:40:13.043502 | orchestrator | 2025-02-10 09:40:13.043511 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-02-10 09:40:13.043519 | orchestrator | Monday 10 February 2025 09:38:48 +0000 (0:00:00.306) 0:02:30.929 ******* 2025-02-10 09:40:13.043527 | orchestrator | 2025-02-10 09:40:13.043536 | orchestrator | TASK [designate : 
Flush handlers] ********************************************** 2025-02-10 09:40:13.043544 | orchestrator | Monday 10 February 2025 09:38:49 +0000 (0:00:00.162) 0:02:31.091 ******* 2025-02-10 09:40:13.043552 | orchestrator | 2025-02-10 09:40:13.043561 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-02-10 09:40:13.043569 | orchestrator | Monday 10 February 2025 09:38:49 +0000 (0:00:00.152) 0:02:31.243 ******* 2025-02-10 09:40:13.043578 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:13.043586 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:13.043595 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:13.043603 | orchestrator | 2025-02-10 09:40:13.043611 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-02-10 09:40:13.043620 | orchestrator | Monday 10 February 2025 09:39:03 +0000 (0:00:14.102) 0:02:45.346 ******* 2025-02-10 09:40:13.043632 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:13.043641 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:13.043649 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:13.043658 | orchestrator | 2025-02-10 09:40:13.043666 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-02-10 09:40:13.043675 | orchestrator | Monday 10 February 2025 09:39:16 +0000 (0:00:13.422) 0:02:58.769 ******* 2025-02-10 09:40:13.043683 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:13.043692 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:13.043700 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:13.043708 | orchestrator | 2025-02-10 09:40:13.043717 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-02-10 09:40:13.043725 | orchestrator | Monday 10 February 2025 09:39:26 +0000 (0:00:09.626) 0:03:08.395 ******* 2025-02-10 09:40:13.043734 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:13.043742 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:13.043751 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:13.043759 | orchestrator | 2025-02-10 09:40:13.043767 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-02-10 09:40:13.043776 | orchestrator | Monday 10 February 2025 09:39:37 +0000 (0:00:10.690) 0:03:19.085 ******* 2025-02-10 09:40:13.043784 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:13.043793 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:13.043801 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:13.043809 | orchestrator | 2025-02-10 09:40:13.043818 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-02-10 09:40:13.043826 | orchestrator | Monday 10 February 2025 09:39:50 +0000 (0:00:13.519) 0:03:32.604 ******* 2025-02-10 09:40:13.043834 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:13.043843 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:13.043851 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:13.043860 | orchestrator | 2025-02-10 09:40:13.043872 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-02-10 09:40:13.043880 | orchestrator | Monday 10 February 2025 09:40:06 +0000 (0:00:16.297) 0:03:48.902 ******* 2025-02-10 09:40:13.043889 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:13.043897 | orchestrator 
| 2025-02-10 09:40:13.043905 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:40:13.043914 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:40:13.043923 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:40:13.043935 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:40:16.073806 | orchestrator | 2025-02-10 09:40:16.073979 | orchestrator | 2025-02-10 09:40:16.074012 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:40:16.074143 | orchestrator | Monday 10 February 2025 09:40:12 +0000 (0:00:05.786) 0:03:54.689 ******* 2025-02-10 09:40:16.074164 | orchestrator | =============================================================================== 2025-02-10 09:40:16.074184 | orchestrator | designate : Copying over designate.conf -------------------------------- 32.97s 2025-02-10 09:40:16.074204 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.54s 2025-02-10 09:40:16.074225 | orchestrator | designate : Restart designate-worker container ------------------------- 16.30s 2025-02-10 09:40:16.074245 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.10s 2025-02-10 09:40:16.074266 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.52s 2025-02-10 09:40:16.074286 | orchestrator | designate : Restart designate-api container ---------------------------- 13.42s 2025-02-10 09:40:16.074360 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 13.29s 2025-02-10 09:40:16.074378 | orchestrator | designate : Restart designate-producer container ----------------------- 10.69s 2025-02-10 09:40:16.074399 | orchestrator | designate : Restart designate-central container ------------------------- 9.63s 2025-02-10 09:40:16.074420 | orchestrator | designate : Check designate containers ---------------------------------- 8.68s 2025-02-10 09:40:16.074592 | orchestrator | designate : Copying over config.json files for services ----------------- 8.36s 2025-02-10 09:40:16.074616 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.44s 2025-02-10 09:40:16.074637 | orchestrator | designate : Copying over named.conf ------------------------------------- 7.02s 2025-02-10 09:40:16.074657 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.74s 2025-02-10 09:40:16.074678 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 6.60s 2025-02-10 09:40:16.074700 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.79s 2025-02-10 09:40:16.074722 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.71s 2025-02-10 09:40:16.074742 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.98s 2025-02-10 09:40:16.074760 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.86s 2025-02-10 09:40:16.074773 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.68s 2025-02-10 09:40:16.074786 | orchestrator | 2025-02-10 09:40:13 | INFO  | Task 
89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:16.074800 | orchestrator | 2025-02-10 09:40:13 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:16.074812 | orchestrator | 2025-02-10 09:40:13 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:16.074825 | orchestrator | 2025-02-10 09:40:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:16.074857 | orchestrator | 2025-02-10 09:40:16 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:16.075513 | orchestrator | 2025-02-10 09:40:16 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:16.075557 | orchestrator | 2025-02-10 09:40:16 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:16.075589 | orchestrator | 2025-02-10 09:40:16 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:19.107702 | orchestrator | 2025-02-10 09:40:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:19.107896 | orchestrator | 2025-02-10 09:40:19 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:19.108670 | orchestrator | 2025-02-10 09:40:19 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:19.108804 | orchestrator | 2025-02-10 09:40:19 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:19.109049 | orchestrator | 2025-02-10 09:40:19 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:19.109193 | orchestrator | 2025-02-10 09:40:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:22.150597 | orchestrator | 2025-02-10 09:40:22 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:22.151522 | orchestrator | 2025-02-10 09:40:22 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:22.151924 | orchestrator | 2025-02-10 09:40:22 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:22.152451 | orchestrator | 2025-02-10 09:40:22 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:25.208453 | orchestrator | 2025-02-10 09:40:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:25.208616 | orchestrator | 2025-02-10 09:40:25 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:28.253512 | orchestrator | 2025-02-10 09:40:25 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:28.253654 | orchestrator | 2025-02-10 09:40:25 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:28.253673 | orchestrator | 2025-02-10 09:40:25 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:28.253690 | orchestrator | 2025-02-10 09:40:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:28.253724 | orchestrator | 2025-02-10 09:40:28 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:28.255407 | orchestrator | 2025-02-10 09:40:28 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:28.255464 | orchestrator | 2025-02-10 09:40:28 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:28.255501 | orchestrator | 2025-02-10 09:40:28 | INFO  | Task 
0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:31.308298 | orchestrator | 2025-02-10 09:40:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:31.308411 | orchestrator | 2025-02-10 09:40:31 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:31.309141 | orchestrator | 2025-02-10 09:40:31 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:31.309157 | orchestrator | 2025-02-10 09:40:31 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:31.310098 | orchestrator | 2025-02-10 09:40:31 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:31.310114 | orchestrator | 2025-02-10 09:40:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:34.344972 | orchestrator | 2025-02-10 09:40:34 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:34.347884 | orchestrator | 2025-02-10 09:40:34 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:34.347938 | orchestrator | 2025-02-10 09:40:34 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:34.348674 | orchestrator | 2025-02-10 09:40:34 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:37.381731 | orchestrator | 2025-02-10 09:40:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:37.381896 | orchestrator | 2025-02-10 09:40:37 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:40.425923 | orchestrator | 2025-02-10 09:40:37 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:40.426146 | orchestrator | 2025-02-10 09:40:37 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:40.426168 | orchestrator | 2025-02-10 09:40:37 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:40.426185 | orchestrator | 2025-02-10 09:40:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:40.426217 | orchestrator | 2025-02-10 09:40:40 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:40.426622 | orchestrator | 2025-02-10 09:40:40 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:40.426653 | orchestrator | 2025-02-10 09:40:40 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state STARTED 2025-02-10 09:40:40.426711 | orchestrator | 2025-02-10 09:40:40 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:43.460755 | orchestrator | 2025-02-10 09:40:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:43.460922 | orchestrator | 2025-02-10 09:40:43 | INFO  | Task f131b928-70b5-4337-9d7c-98fc97870ec8 is in state STARTED 2025-02-10 09:40:43.461414 | orchestrator | 2025-02-10 09:40:43 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:43.463233 | orchestrator | 2025-02-10 09:40:43 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:43.464009 | orchestrator | 2025-02-10 09:40:43 | INFO  | Task 581808cb-bca8-4f2c-82e6-5026adee609e is in state SUCCESS 2025-02-10 09:40:43.465457 | orchestrator | 2025-02-10 09:40:43.465491 | orchestrator | 2025-02-10 09:40:43.465502 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2025-02-10 09:40:43.465513 | orchestrator | 2025-02-10 09:40:43.465524 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:40:43.465534 | orchestrator | Monday 10 February 2025 09:39:06 +0000 (0:00:01.676) 0:00:01.676 ******* 2025-02-10 09:40:43.465545 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:40:43.465557 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:40:43.465568 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:40:43.465578 | orchestrator | 2025-02-10 09:40:43.465589 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:40:43.465599 | orchestrator | Monday 10 February 2025 09:39:09 +0000 (0:00:02.187) 0:00:03.864 ******* 2025-02-10 09:40:43.465610 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-02-10 09:40:43.465621 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-02-10 09:40:43.465631 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-02-10 09:40:43.465641 | orchestrator | 2025-02-10 09:40:43.465651 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-02-10 09:40:43.465661 | orchestrator | 2025-02-10 09:40:43.465688 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-10 09:40:43.465699 | orchestrator | Monday 10 February 2025 09:39:10 +0000 (0:00:01.360) 0:00:05.225 ******* 2025-02-10 09:40:43.465709 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:40:43.465720 | orchestrator | 2025-02-10 09:40:43.465731 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-02-10 09:40:43.465741 | orchestrator | Monday 10 February 2025 09:39:12 +0000 (0:00:01.858) 0:00:07.083 ******* 2025-02-10 09:40:43.465750 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-02-10 09:40:43.465761 | orchestrator | 2025-02-10 09:40:43.465771 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-02-10 09:40:43.465780 | orchestrator | Monday 10 February 2025 09:39:16 +0000 (0:00:04.262) 0:00:11.346 ******* 2025-02-10 09:40:43.465790 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-02-10 09:40:43.465801 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-02-10 09:40:43.465811 | orchestrator | 2025-02-10 09:40:43.465821 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-02-10 09:40:43.465831 | orchestrator | Monday 10 February 2025 09:39:23 +0000 (0:00:06.738) 0:00:18.085 ******* 2025-02-10 09:40:43.465841 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:40:43.465851 | orchestrator | 2025-02-10 09:40:43.465862 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-02-10 09:40:43.465872 | orchestrator | Monday 10 February 2025 09:39:27 +0000 (0:00:04.482) 0:00:22.567 ******* 2025-02-10 09:40:43.465906 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:40:43.465916 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-02-10 09:40:43.465927 | orchestrator | 2025-02-10 09:40:43.465937 | 
orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-02-10 09:40:43.465947 | orchestrator | Monday 10 February 2025 09:39:32 +0000 (0:00:04.712) 0:00:27.280 ******* 2025-02-10 09:40:43.465962 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:40:43.465975 | orchestrator | 2025-02-10 09:40:43.465993 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-02-10 09:40:43.466009 | orchestrator | Monday 10 February 2025 09:39:36 +0000 (0:00:03.913) 0:00:31.193 ******* 2025-02-10 09:40:43.466149 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-02-10 09:40:43.466167 | orchestrator | 2025-02-10 09:40:43.466185 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-10 09:40:43.466202 | orchestrator | Monday 10 February 2025 09:39:41 +0000 (0:00:05.261) 0:00:36.455 ******* 2025-02-10 09:40:43.466219 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:43.466238 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:43.466258 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:43.466276 | orchestrator | 2025-02-10 09:40:43.466293 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-02-10 09:40:43.466310 | orchestrator | Monday 10 February 2025 09:39:42 +0000 (0:00:00.819) 0:00:37.275 ******* 2025-02-10 09:40:43.466325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466394 | orchestrator | 2025-02-10 09:40:43.466405 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-02-10 09:40:43.466417 | orchestrator | Monday 10 February 2025 09:39:43 +0000 (0:00:01.277) 0:00:38.553 ******* 2025-02-10 09:40:43.466428 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:43.466438 | orchestrator | 2025-02-10 09:40:43.466448 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-02-10 09:40:43.466458 | orchestrator | Monday 10 February 2025 09:39:44 +0000 (0:00:00.390) 0:00:38.944 ******* 2025-02-10 09:40:43.466468 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:43.466478 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:43.466495 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:43.466505 | orchestrator | 2025-02-10 09:40:43.466515 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-10 09:40:43.466526 | orchestrator | Monday 10 February 2025 09:39:44 +0000 (0:00:00.490) 0:00:39.434 ******* 2025-02-10 09:40:43.466536 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:40:43.466546 | orchestrator | 2025-02-10 09:40:43.466556 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-02-10 09:40:43.466566 | orchestrator | Monday 10 February 2025 09:39:45 +0000 (0:00:01.377) 0:00:40.812 ******* 2025-02-10 09:40:43.466606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466655 | orchestrator | 2025-02-10 09:40:43.466665 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-02-10 09:40:43.466675 | orchestrator | Monday 10 February 2025 09:39:49 +0000 (0:00:03.073) 0:00:43.886 ******* 2025-02-10 09:40:43.466686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.466696 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:43.466714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.466725 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:43.466743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.466754 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:43.466764 | orchestrator | 2025-02-10 09:40:43.466774 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-02-10 09:40:43.466784 | orchestrator | Monday 10 February 2025 09:39:49 +0000 (0:00:00.684) 0:00:44.570 ******* 2025-02-10 09:40:43.466800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.466811 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:43.466821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.466832 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:43.466842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.466852 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:43.466862 | orchestrator | 2025-02-10 09:40:43.466872 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-02-10 09:40:43.466882 | orchestrator | Monday 10 February 2025 09:39:52 +0000 (0:00:02.664) 0:00:47.235 ******* 2025-02-10 09:40:43.466912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.466960 | orchestrator | 2025-02-10 09:40:43.466970 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-02-10 09:40:43.466980 | orchestrator | Monday 10 February 2025 09:39:57 +0000 (0:00:04.646) 0:00:51.881 ******* 2025-02-10 09:40:43.466990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.467015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.467032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.467043 | orchestrator | 2025-02-10 09:40:43.467053 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-02-10 09:40:43.467063 | orchestrator | Monday 10 February 2025 09:40:01 +0000 (0:00:04.208) 0:00:56.089 ******* 2025-02-10 09:40:43.467099 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-10 09:40:43.467109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-10 09:40:43.467120 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-10 09:40:43.467130 | orchestrator | 2025-02-10 09:40:43.467140 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-02-10 09:40:43.467150 | orchestrator | Monday 10 February 2025 09:40:04 +0000 (0:00:02.982) 0:00:59.072 ******* 2025-02-10 09:40:43.467160 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:43.467170 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:43.467180 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:43.467190 | orchestrator | 2025-02-10 09:40:43.467200 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-02-10 09:40:43.467210 | orchestrator | Monday 10 February 2025 09:40:07 +0000 (0:00:02.871) 0:01:01.944 ******* 2025-02-10 09:40:43.467220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.467231 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:43.467241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.467258 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:43.467284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:40:43.467299 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:43.467316 | orchestrator | 2025-02-10 09:40:43.467332 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-02-10 09:40:43.467348 | orchestrator | Monday 10 February 2025 09:40:08 +0000 (0:00:01.035) 0:01:02.980 ******* 2025-02-10 09:40:43.467366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.467383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.467402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:43.467428 | orchestrator | 2025-02-10 09:40:43.467446 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-02-10 09:40:43.467462 | orchestrator | Monday 10 February 2025 09:40:11 +0000 (0:00:02.843) 0:01:05.823 ******* 2025-02-10 09:40:43.467478 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:43.467495 | orchestrator | 2025-02-10 09:40:43.467510 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-02-10 09:40:43.467527 | orchestrator | Monday 10 February 2025 09:40:13 +0000 (0:00:02.970) 0:01:08.794 ******* 2025-02-10 09:40:43.467552 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:46.496726 | orchestrator | 2025-02-10 09:40:46.496873 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-02-10 09:40:46.497024 | orchestrator | Monday 10 February 2025 09:40:16 +0000 (0:00:02.408) 0:01:11.202 ******* 2025-02-10 09:40:46.497046 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:46.497063 | orchestrator | 2025-02-10 09:40:46.497128 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-10 09:40:46.497143 | orchestrator | Monday 10 February 2025 09:40:28 +0000 (0:00:12.072) 0:01:23.274 ******* 2025-02-10 09:40:46.497157 | orchestrator | 2025-02-10 09:40:46.497171 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-10 09:40:46.497186 | orchestrator | Monday 10 February 2025 09:40:28 +0000 (0:00:00.094) 0:01:23.368 ******* 2025-02-10 09:40:46.497199 | orchestrator | 2025-02-10 09:40:46.497214 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-10 09:40:46.497228 | orchestrator | Monday 10 February 2025 09:40:28 +0000 (0:00:00.374) 0:01:23.743 ******* 2025-02-10 09:40:46.497241 | orchestrator | 2025-02-10 09:40:46.497255 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-02-10 09:40:46.497270 | orchestrator | Monday 10 February 2025 09:40:29 +0000 (0:00:00.094) 0:01:23.837 ******* 2025-02-10 
09:40:46.497283 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:46.497297 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:46.497311 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:46.497325 | orchestrator | 2025-02-10 09:40:46.497339 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:40:46.497354 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:40:46.497370 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:40:46.497384 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:40:46.497398 | orchestrator | 2025-02-10 09:40:46.497412 | orchestrator | 2025-02-10 09:40:46.497426 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:40:46.497440 | orchestrator | Monday 10 February 2025 09:40:40 +0000 (0:00:11.645) 0:01:35.483 ******* 2025-02-10 09:40:46.497453 | orchestrator | =============================================================================== 2025-02-10 09:40:46.497467 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.07s 2025-02-10 09:40:46.497482 | orchestrator | placement : Restart placement-api container ---------------------------- 11.65s 2025-02-10 09:40:46.497496 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.74s 2025-02-10 09:40:46.497530 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.26s 2025-02-10 09:40:46.497570 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.71s 2025-02-10 09:40:46.497585 | orchestrator | placement : Copying over config.json files for services ----------------- 4.65s 2025-02-10 09:40:46.497598 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.48s 2025-02-10 09:40:46.497612 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.26s 2025-02-10 09:40:46.497626 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.21s 2025-02-10 09:40:46.497640 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.91s 2025-02-10 09:40:46.497656 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 3.07s 2025-02-10 09:40:46.497671 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.98s 2025-02-10 09:40:46.497686 | orchestrator | placement : Creating placement databases -------------------------------- 2.97s 2025-02-10 09:40:46.497781 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.87s 2025-02-10 09:40:46.497801 | orchestrator | placement : Check placement containers ---------------------------------- 2.84s 2025-02-10 09:40:46.497817 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.66s 2025-02-10 09:40:46.497833 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.41s 2025-02-10 09:40:46.497848 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.19s 2025-02-10 09:40:46.497863 | orchestrator | placement : include_tasks 
----------------------------------------------- 1.86s 2025-02-10 09:40:46.497878 | orchestrator | placement : include_tasks ----------------------------------------------- 1.38s 2025-02-10 09:40:46.497895 | orchestrator | 2025-02-10 09:40:43 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:46.497911 | orchestrator | 2025-02-10 09:40:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:46.497946 | orchestrator | 2025-02-10 09:40:46 | INFO  | Task f131b928-70b5-4337-9d7c-98fc97870ec8 is in state SUCCESS 2025-02-10 09:40:46.499619 | orchestrator | 2025-02-10 09:40:46 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:46.499649 | orchestrator | 2025-02-10 09:40:46 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:46.499671 | orchestrator | 2025-02-10 09:40:46 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:49.543575 | orchestrator | 2025-02-10 09:40:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:49.543735 | orchestrator | 2025-02-10 09:40:49 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:40:49.545047 | orchestrator | 2025-02-10 09:40:49 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:49.546953 | orchestrator | 2025-02-10 09:40:49 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:49.548298 | orchestrator | 2025-02-10 09:40:49 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:49.548711 | orchestrator | 2025-02-10 09:40:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:52.595399 | orchestrator | 2025-02-10 09:40:52 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:40:52.596109 | orchestrator | 2025-02-10 09:40:52 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:52.597811 | orchestrator | 2025-02-10 09:40:52 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:52.599711 | orchestrator | 2025-02-10 09:40:52 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:55.647234 | orchestrator | 2025-02-10 09:40:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:55.647397 | orchestrator | 2025-02-10 09:40:55 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:40:55.649171 | orchestrator | 2025-02-10 09:40:55 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:40:55.652528 | orchestrator | 2025-02-10 09:40:55 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:40:55.652958 | orchestrator | 2025-02-10 09:40:55 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:40:55.653013 | orchestrator | 2025-02-10 09:40:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:58.700964 | orchestrator | 2025-02-10 09:40:58 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:01.733286 | orchestrator | 2025-02-10 09:40:58 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:01.733460 | orchestrator | 2025-02-10 09:40:58 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:01.733484 | orchestrator | 2025-02-10 09:40:58 | INFO  | Task 
0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:01.733502 | orchestrator | 2025-02-10 09:40:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:01.733540 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:01.734104 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:01.734145 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:01.735043 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:04.778694 | orchestrator | 2025-02-10 09:41:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:04.778889 | orchestrator | 2025-02-10 09:41:04 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:04.780391 | orchestrator | 2025-02-10 09:41:04 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:04.780820 | orchestrator | 2025-02-10 09:41:04 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:04.781742 | orchestrator | 2025-02-10 09:41:04 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:07.830262 | orchestrator | 2025-02-10 09:41:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:07.830426 | orchestrator | 2025-02-10 09:41:07 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:07.831254 | orchestrator | 2025-02-10 09:41:07 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:07.831292 | orchestrator | 2025-02-10 09:41:07 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:07.832280 | orchestrator | 2025-02-10 09:41:07 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:10.858593 | orchestrator | 2025-02-10 09:41:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:10.858712 | orchestrator | 2025-02-10 09:41:10 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:10.859984 | orchestrator | 2025-02-10 09:41:10 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:10.862266 | orchestrator | 2025-02-10 09:41:10 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:10.862887 | orchestrator | 2025-02-10 09:41:10 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:10.864230 | orchestrator | 2025-02-10 09:41:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:13.906482 | orchestrator | 2025-02-10 09:41:13 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:13.909293 | orchestrator | 2025-02-10 09:41:13 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:13.909397 | orchestrator | 2025-02-10 09:41:13 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:13.911410 | orchestrator | 2025-02-10 09:41:13 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:16.940577 | orchestrator | 2025-02-10 09:41:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:16.940740 | orchestrator | 2025-02-10 09:41:16 | INFO  | Task 
9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:16.941609 | orchestrator | 2025-02-10 09:41:16 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:16.941814 | orchestrator | 2025-02-10 09:41:16 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:16.942388 | orchestrator | 2025-02-10 09:41:16 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:19.985789 | orchestrator | 2025-02-10 09:41:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:19.985965 | orchestrator | 2025-02-10 09:41:19 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:19.993920 | orchestrator | 2025-02-10 09:41:19 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:19.994764 | orchestrator | 2025-02-10 09:41:19 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:19.994816 | orchestrator | 2025-02-10 09:41:19 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:23.034649 | orchestrator | 2025-02-10 09:41:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:23.034936 | orchestrator | 2025-02-10 09:41:23 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:23.034983 | orchestrator | 2025-02-10 09:41:23 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:23.035015 | orchestrator | 2025-02-10 09:41:23 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:23.036337 | orchestrator | 2025-02-10 09:41:23 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:26.090910 | orchestrator | 2025-02-10 09:41:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:26.091161 | orchestrator | 2025-02-10 09:41:26 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:26.092201 | orchestrator | 2025-02-10 09:41:26 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:26.092273 | orchestrator | 2025-02-10 09:41:26 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:26.092314 | orchestrator | 2025-02-10 09:41:26 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:29.129641 | orchestrator | 2025-02-10 09:41:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:29.129841 | orchestrator | 2025-02-10 09:41:29 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:29.130926 | orchestrator | 2025-02-10 09:41:29 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:29.131991 | orchestrator | 2025-02-10 09:41:29 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:29.132652 | orchestrator | 2025-02-10 09:41:29 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:29.133977 | orchestrator | 2025-02-10 09:41:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:32.193756 | orchestrator | 2025-02-10 09:41:32 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:32.193893 | orchestrator | 2025-02-10 09:41:32 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:32.193918 | orchestrator | 2025-02-10 09:41:32 | INFO  | Task 
699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:32.194476 | orchestrator | 2025-02-10 09:41:32 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:35.238469 | orchestrator | 2025-02-10 09:41:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:35.238605 | orchestrator | 2025-02-10 09:41:35 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:35.242528 | orchestrator | 2025-02-10 09:41:35 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:35.242595 | orchestrator | 2025-02-10 09:41:35 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:35.242774 | orchestrator | 2025-02-10 09:41:35 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:38.284442 | orchestrator | 2025-02-10 09:41:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:38.284705 | orchestrator | 2025-02-10 09:41:38 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:38.285910 | orchestrator | 2025-02-10 09:41:38 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:38.285939 | orchestrator | 2025-02-10 09:41:38 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:38.285961 | orchestrator | 2025-02-10 09:41:38 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:41.330625 | orchestrator | 2025-02-10 09:41:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:41.330791 | orchestrator | 2025-02-10 09:41:41 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:41.331157 | orchestrator | 2025-02-10 09:41:41 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:41.332928 | orchestrator | 2025-02-10 09:41:41 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:41.333573 | orchestrator | 2025-02-10 09:41:41 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:44.379282 | orchestrator | 2025-02-10 09:41:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:44.379483 | orchestrator | 2025-02-10 09:41:44 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:44.379899 | orchestrator | 2025-02-10 09:41:44 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:44.379955 | orchestrator | 2025-02-10 09:41:44 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:44.382779 | orchestrator | 2025-02-10 09:41:44 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:47.418670 | orchestrator | 2025-02-10 09:41:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:47.418917 | orchestrator | 2025-02-10 09:41:47 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:47.419628 | orchestrator | 2025-02-10 09:41:47 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:47.419681 | orchestrator | 2025-02-10 09:41:47 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:47.419705 | orchestrator | 2025-02-10 09:41:47 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:50.461375 | orchestrator | 2025-02-10 09:41:47 | INFO  | Wait 1 
second(s) until the next check 2025-02-10 09:41:50.461553 | orchestrator | 2025-02-10 09:41:50 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:50.462771 | orchestrator | 2025-02-10 09:41:50 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state STARTED 2025-02-10 09:41:50.465676 | orchestrator | 2025-02-10 09:41:50 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:50.467737 | orchestrator | 2025-02-10 09:41:50 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:53.526529 | orchestrator | 2025-02-10 09:41:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:53.526694 | orchestrator | 2025-02-10 09:41:53 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:41:53.526945 | orchestrator | 2025-02-10 09:41:53 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:53.529443 | orchestrator | 2025-02-10 09:41:53 | INFO  | Task 89571048-bb98-4352-8b27-8b749980574e is in state SUCCESS 2025-02-10 09:41:53.531127 | orchestrator | 2025-02-10 09:41:53.531177 | orchestrator | 2025-02-10 09:41:53.531193 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:41:53.531209 | orchestrator | 2025-02-10 09:41:53.531224 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:41:53.531239 | orchestrator | Monday 10 February 2025 09:40:44 +0000 (0:00:00.275) 0:00:00.275 ******* 2025-02-10 09:41:53.531253 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:41:53.531269 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:41:53.531284 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:41:53.531298 | orchestrator | 2025-02-10 09:41:53.531312 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:41:53.531326 | orchestrator | Monday 10 February 2025 09:40:44 +0000 (0:00:00.375) 0:00:00.651 ******* 2025-02-10 09:41:53.531340 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-02-10 09:41:53.531354 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-02-10 09:41:53.531368 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-02-10 09:41:53.531717 | orchestrator | 2025-02-10 09:41:53.531734 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-02-10 09:41:53.531748 | orchestrator | 2025-02-10 09:41:53.531762 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-02-10 09:41:53.531777 | orchestrator | Monday 10 February 2025 09:40:45 +0000 (0:00:00.482) 0:00:01.134 ******* 2025-02-10 09:41:53.531791 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:41:53.531864 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:41:53.531879 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:41:53.531894 | orchestrator | 2025-02-10 09:41:53.531908 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:41:53.531924 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:53.531941 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:53.532376 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-02-10 09:41:53.532396 | orchestrator | 2025-02-10 09:41:53.532457 | orchestrator | 2025-02-10 09:41:53.532474 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:41:53.532489 | orchestrator | Monday 10 February 2025 09:40:45 +0000 (0:00:00.754) 0:00:01.888 ******* 2025-02-10 09:41:53.532504 | orchestrator | =============================================================================== 2025-02-10 09:41:53.532518 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.75s 2025-02-10 09:41:53.532534 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-02-10 09:41:53.532549 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2025-02-10 09:41:53.532670 | orchestrator | 2025-02-10 09:41:53.532686 | orchestrator | 2025-02-10 09:41:53.532707 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-02-10 09:41:53.532721 | orchestrator | 2025-02-10 09:41:53.533000 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-02-10 09:41:53.533021 | orchestrator | Monday 10 February 2025 09:36:17 +0000 (0:00:00.157) 0:00:00.157 ******* 2025-02-10 09:41:53.533037 | orchestrator | changed: [localhost] 2025-02-10 09:41:53.533052 | orchestrator | 2025-02-10 09:41:53.533067 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-02-10 09:41:53.533120 | orchestrator | Monday 10 February 2025 09:36:18 +0000 (0:00:00.595) 0:00:00.752 ******* 2025-02-10 09:41:53.533136 | orchestrator | changed: [localhost] 2025-02-10 09:41:53.533286 | orchestrator | 2025-02-10 09:41:53.533305 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-02-10 09:41:53.533320 | orchestrator | Monday 10 February 2025 09:36:48 +0000 (0:00:29.768) 0:00:30.521 ******* 2025-02-10 09:41:53.533333 | orchestrator | changed: [localhost] 2025-02-10 09:41:53.533347 | orchestrator | 2025-02-10 09:41:53.533361 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:41:53.533375 | orchestrator | 2025-02-10 09:41:53.533389 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:41:53.533403 | orchestrator | Monday 10 February 2025 09:36:51 +0000 (0:00:03.529) 0:00:34.050 ******* 2025-02-10 09:41:53.533416 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:41:53.533430 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:41:53.533444 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:41:53.533457 | orchestrator | 2025-02-10 09:41:53.533471 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:41:53.533485 | orchestrator | Monday 10 February 2025 09:36:52 +0000 (0:00:00.519) 0:00:34.570 ******* 2025-02-10 09:41:53.533498 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_True) 2025-02-10 09:41:53.533512 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_True) 2025-02-10 09:41:53.533526 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_True) 2025-02-10 09:41:53.533540 | orchestrator | 2025-02-10 09:41:53.533554 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-02-10 09:41:53.533567 | orchestrator | 2025-02-10 
09:41:53.533581 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-02-10 09:41:53.533595 | orchestrator | Monday 10 February 2025 09:36:53 +0000 (0:00:01.009) 0:00:35.579 ******* 2025-02-10 09:41:53.533609 | orchestrator | included: /ansible/roles/ironic/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:41:53.533623 | orchestrator | 2025-02-10 09:41:53.533637 | orchestrator | TASK [service-ks-register : ironic | Creating services] ************************ 2025-02-10 09:41:53.533651 | orchestrator | Monday 10 February 2025 09:36:54 +0000 (0:00:01.160) 0:00:36.740 ******* 2025-02-10 09:41:53.533665 | orchestrator | changed: [testbed-node-0] => (item=ironic (baremetal)) 2025-02-10 09:41:53.533679 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector (baremetal-introspection)) 2025-02-10 09:41:53.533707 | orchestrator | 2025-02-10 09:41:53.533767 | orchestrator | TASK [service-ks-register : ironic | Creating endpoints] *********************** 2025-02-10 09:41:53.533784 | orchestrator | Monday 10 February 2025 09:37:01 +0000 (0:00:07.084) 0:00:43.824 ******* 2025-02-10 09:41:53.533798 | orchestrator | changed: [testbed-node-0] => (item=ironic -> https://api-int.testbed.osism.xyz:6385 -> internal) 2025-02-10 09:41:53.533813 | orchestrator | changed: [testbed-node-0] => (item=ironic -> https://api.testbed.osism.xyz:6385 -> public) 2025-02-10 09:41:53.533879 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> https://api-int.testbed.osism.xyz:5050 -> internal) 2025-02-10 09:41:53.533893 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> https://api.testbed.osism.xyz:5050 -> public) 2025-02-10 09:41:53.533907 | orchestrator | 2025-02-10 09:41:53.533921 | orchestrator | TASK [service-ks-register : ironic | Creating projects] ************************ 2025-02-10 09:41:53.533935 | orchestrator | Monday 10 February 2025 09:37:15 +0000 (0:00:13.430) 0:00:57.255 ******* 2025-02-10 09:41:53.533949 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:41:53.533964 | orchestrator | 2025-02-10 09:41:53.533977 | orchestrator | TASK [service-ks-register : ironic | Creating users] *************************** 2025-02-10 09:41:53.533991 | orchestrator | Monday 10 February 2025 09:37:19 +0000 (0:00:04.181) 0:01:01.437 ******* 2025-02-10 09:41:53.534005 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:41:53.534079 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service) 2025-02-10 09:41:53.534156 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service) 2025-02-10 09:41:53.534172 | orchestrator | 2025-02-10 09:41:53.534186 | orchestrator | TASK [service-ks-register : ironic | Creating roles] *************************** 2025-02-10 09:41:53.534199 | orchestrator | Monday 10 February 2025 09:37:27 +0000 (0:00:08.111) 0:01:09.550 ******* 2025-02-10 09:41:53.534213 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:41:53.534227 | orchestrator | 2025-02-10 09:41:53.534241 | orchestrator | TASK [service-ks-register : ironic | Granting user roles] ********************** 2025-02-10 09:41:53.534254 | orchestrator | Monday 10 February 2025 09:37:31 +0000 (0:00:03.762) 0:01:13.313 ******* 2025-02-10 09:41:53.534268 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service -> admin) 2025-02-10 09:41:53.534282 | orchestrator | changed: [testbed-node-0] => 
(item=ironic-inspector -> service -> admin) 2025-02-10 09:41:53.534296 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service -> service) 2025-02-10 09:41:53.534310 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service -> service) 2025-02-10 09:41:53.534324 | orchestrator | 2025-02-10 09:41:53.534337 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-10 09:41:53.534351 | orchestrator | Monday 10 February 2025 09:37:47 +0000 (0:00:16.325) 0:01:29.639 ******* 2025-02-10 09:41:53.534365 | orchestrator | changed: [testbed-node-1] => (item=iscsi_tcp) 2025-02-10 09:41:53.534387 | orchestrator | changed: [testbed-node-0] => (item=iscsi_tcp) 2025-02-10 09:41:53.534401 | orchestrator | changed: [testbed-node-2] => (item=iscsi_tcp) 2025-02-10 09:41:53.534415 | orchestrator | 2025-02-10 09:41:53.534429 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-10 09:41:53.534443 | orchestrator | Monday 10 February 2025 09:37:48 +0000 (0:00:01.521) 0:01:31.160 ******* 2025-02-10 09:41:53.534456 | orchestrator | changed: [testbed-node-0] => (item=iscsi_tcp) 2025-02-10 09:41:53.534470 | orchestrator | changed: [testbed-node-1] => (item=iscsi_tcp) 2025-02-10 09:41:53.534484 | orchestrator | changed: [testbed-node-2] => (item=iscsi_tcp) 2025-02-10 09:41:53.534498 | orchestrator | 2025-02-10 09:41:53.534512 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-10 09:41:53.534526 | orchestrator | Monday 10 February 2025 09:37:52 +0000 (0:00:03.134) 0:01:34.295 ******* 2025-02-10 09:41:53.534540 | orchestrator | skipping: [testbed-node-0] => (item=iscsi_tcp)  2025-02-10 09:41:53.534564 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.534579 | orchestrator | skipping: [testbed-node-1] => (item=iscsi_tcp)  2025-02-10 09:41:53.534593 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.534607 | orchestrator | skipping: [testbed-node-2] => (item=iscsi_tcp)  2025-02-10 09:41:53.534620 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.534634 | orchestrator | 2025-02-10 09:41:53.534648 | orchestrator | TASK [ironic : Ensuring config directories exist] ****************************** 2025-02-10 09:41:53.534662 | orchestrator | Monday 10 February 2025 09:37:54 +0000 (0:00:02.028) 0:01:36.323 ******* 2025-02-10 09:41:53.534678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.534770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.534787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.534801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.534823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.534836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': 
True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.534888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.534906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.534920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 
09:41:53.534940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.534963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.535005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.535020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.535048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.535081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.535116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.535163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.535202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.535215 | orchestrator | 2025-02-10 09:41:53.535228 | orchestrator | TASK [ironic : Check if Ironic policies shall be overwritten] ****************** 2025-02-10 09:41:53.535241 | orchestrator | Monday 10 February 2025 09:37:58 +0000 (0:00:04.177) 0:01:40.501 ******* 2025-02-10 09:41:53.535253 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.535266 | orchestrator | 2025-02-10 09:41:53.535278 | orchestrator | TASK [ironic : Check if Ironic Inspector policies shall be overwritten] ******** 2025-02-10 09:41:53.535291 | orchestrator | Monday 10 February 2025 09:37:58 +0000 (0:00:00.230) 0:01:40.732 ******* 2025-02-10 09:41:53.535318 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.535330 | orchestrator | 2025-02-10 09:41:53.535343 | orchestrator | TASK [ironic : Set ironic policy file] ***************************************** 2025-02-10 09:41:53.535355 | orchestrator | Monday 10 February 2025 09:37:58 +0000 (0:00:00.199) 0:01:40.931 ******* 2025-02-10 09:41:53.535368 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.535380 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.535393 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.535405 | orchestrator | 2025-02-10 09:41:53.535417 | orchestrator | TASK [ironic : Set ironic-inspector policy file] ******************************* 2025-02-10 09:41:53.535429 | orchestrator | Monday 10 February 2025 09:38:00 +0000 (0:00:01.492) 0:01:42.424 ******* 2025-02-10 09:41:53.535441 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.535454 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.535467 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.535479 | orchestrator | 2025-02-10 09:41:53.535491 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-02-10 09:41:53.535504 | orchestrator | Monday 10 February 2025 09:38:00 +0000 (0:00:00.670) 0:01:43.094 ******* 2025-02-10 09:41:53.535516 | orchestrator | included: /ansible/roles/ironic/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:41:53.535528 | orchestrator | 2025-02-10 09:41:53.535541 | orchestrator | TASK [service-cert-copy : ironic | Copying over extra CA certificates] ********* 2025-02-10 09:41:53.535553 | orchestrator | Monday 10 February 2025 09:38:01 +0000 (0:00:00.966) 0:01:44.061 ******* 2025-02-10 09:41:53.535566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.535609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 
'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.535625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.535655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.535740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.535770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.535785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.535798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.535838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.535853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.535909 | orchestrator | 2025-02-10 09:41:53.535922 | orchestrator | TASK [service-cert-copy : ironic | Copying over backend internal TLS certificate] *** 2025-02-10 09:41:53.535935 | orchestrator | Monday 10 February 2025 09:38:09 +0000 (0:00:07.281) 0:01:51.342 ******* 2025-02-10 09:41:53.535947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.535961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.536010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.536033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': 
{'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.536046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.536059 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.536072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.536102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.536156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.536179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.536193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.536206 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.536219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.536241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.536282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.536303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.536333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.536353 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.536374 | orchestrator | 2025-02-10 09:41:53.536394 | orchestrator | TASK [service-cert-copy : ironic | Copying over backend internal TLS key] ****** 2025-02-10 09:41:53.536413 | orchestrator | Monday 10 February 2025 09:38:11 +0000 (0:00:02.314) 0:01:53.658 ******* 2025-02-10 09:41:53.536426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.536449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.536463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.536516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.536531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.536544 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.536557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 
'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.536583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.536597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.536646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.536662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.536675 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.536688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 
'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.536710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.536724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.536764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.536785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.536798 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.536811 | orchestrator | 2025-02-10 09:41:53.536824 | orchestrator | TASK [ironic : Copying over config.json files for services] ******************** 2025-02-10 09:41:53.536836 | orchestrator | Monday 10 February 2025 09:38:13 +0000 (0:00:01.881) 0:01:55.539 ******* 2025-02-10 09:41:53.536858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.536872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.536885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.536933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.536949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.536970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.536984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.536997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.537050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.537075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.537146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.537160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.537173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 
'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.537186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.537206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.537252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.537268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.537281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.537306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': 
{'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.537319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.537332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.537352 | orchestrator | 2025-02-10 09:41:53.537365 | orchestrator | TASK [ironic : Copying over ironic.conf] *************************************** 2025-02-10 09:41:53.537377 | orchestrator | Monday 10 February 2025 09:38:21 +0000 (0:00:07.977) 0:02:03.517 ******* 2025-02-10 09:41:53.537418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.537433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.537452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.537463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.537479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.537513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.537525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.537546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.537558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.537568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.537584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.537599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.537609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.537626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.537637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.537648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.537664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.537675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.537699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.537711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.537722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.537732 | orchestrator | 2025-02-10 09:41:53.537743 | orchestrator | TASK [ironic : Copying over inspector.conf] ************************************ 2025-02-10 09:41:53.537753 | orchestrator | Monday 10 February 2025 09:38:29 +0000 (0:00:08.433) 0:02:11.950 ******* 2025-02-10 09:41:53.537763 | orchestrator | changed: [testbed-node-1] 
2025-02-10 09:41:53.537773 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.537784 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:53.537799 | orchestrator | 2025-02-10 09:41:53.537809 | orchestrator | TASK [ironic : Copying over dnsmasq.conf] ************************************** 2025-02-10 09:41:53.537825 | orchestrator | Monday 10 February 2025 09:38:40 +0000 (0:00:10.312) 0:02:22.263 ******* 2025-02-10 09:41:53.537835 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-02-10 09:41:53.537845 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.537855 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-02-10 09:41:53.537865 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.537875 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-02-10 09:41:53.537885 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.537895 | orchestrator | 2025-02-10 09:41:53.537905 | orchestrator | TASK [ironic : Copying pxelinux.cfg default] *********************************** 2025-02-10 09:41:53.537915 | orchestrator | Monday 10 February 2025 09:38:43 +0000 (0:00:03.106) 0:02:25.369 ******* 2025-02-10 09:41:53.537925 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-02-10 09:41:53.537935 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.537945 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-02-10 09:41:53.537955 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.537965 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-02-10 09:41:53.537975 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.537986 | orchestrator | 2025-02-10 09:41:53.537996 | orchestrator | TASK [ironic : Copying ironic-agent kernel and initramfs (PXE)] **************** 2025-02-10 09:41:53.538006 | orchestrator | Monday 10 February 2025 09:38:46 +0000 (0:00:03.268) 0:02:28.638 ******* 2025-02-10 09:41:53.538041 | orchestrator | skipping: [testbed-node-0] => (item=ironic-agent.kernel)  2025-02-10 09:41:53.538057 | orchestrator | skipping: [testbed-node-1] => (item=ironic-agent.kernel)  2025-02-10 09:41:53.538067 | orchestrator | skipping: [testbed-node-2] => (item=ironic-agent.kernel)  2025-02-10 09:41:53.538077 | orchestrator | skipping: [testbed-node-0] => (item=ironic-agent.initramfs)  2025-02-10 09:41:53.538105 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.538116 | orchestrator | skipping: [testbed-node-2] => (item=ironic-agent.initramfs)  2025-02-10 09:41:53.538126 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.538136 | orchestrator | skipping: [testbed-node-1] => (item=ironic-agent.initramfs)  2025-02-10 09:41:53.538146 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.538156 | orchestrator | 2025-02-10 09:41:53.538166 | orchestrator | TASK [ironic : Copying ironic-agent kernel and initramfs (iPXE)] *************** 2025-02-10 09:41:53.538176 | orchestrator | Monday 10 February 2025 09:38:53 +0000 (0:00:07.548) 0:02:36.187 ******* 2025-02-10 09:41:53.538186 | orchestrator | changed: [testbed-node-1] => (item=ironic-agent.kernel) 2025-02-10 09:41:53.538196 | orchestrator | changed: [testbed-node-0] => 
(item=ironic-agent.kernel) 2025-02-10 09:41:53.538212 | orchestrator | changed: [testbed-node-2] => (item=ironic-agent.kernel) 2025-02-10 09:41:53.538222 | orchestrator | changed: [testbed-node-0] => (item=ironic-agent.initramfs) 2025-02-10 09:41:53.538232 | orchestrator | changed: [testbed-node-1] => (item=ironic-agent.initramfs) 2025-02-10 09:41:53.538242 | orchestrator | changed: [testbed-node-2] => (item=ironic-agent.initramfs) 2025-02-10 09:41:53.538252 | orchestrator | 2025-02-10 09:41:53.538262 | orchestrator | TASK [ironic : Copying inspector.ipxe] ***************************************** 2025-02-10 09:41:53.538272 | orchestrator | Monday 10 February 2025 09:39:15 +0000 (0:00:21.202) 0:02:57.389 ******* 2025-02-10 09:41:53.538281 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-02-10 09:41:53.538291 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-02-10 09:41:53.538301 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-02-10 09:41:53.538317 | orchestrator | 2025-02-10 09:41:53.538327 | orchestrator | TASK [ironic : Copying ironic-http-httpd.conf] ********************************* 2025-02-10 09:41:53.538337 | orchestrator | Monday 10 February 2025 09:39:19 +0000 (0:00:04.474) 0:03:01.864 ******* 2025-02-10 09:41:53.538347 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-02-10 09:41:53.538357 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-02-10 09:41:53.538366 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-02-10 09:41:53.538376 | orchestrator | 2025-02-10 09:41:53.538386 | orchestrator | TASK [ironic : Copying over ironic-prometheus-exporter-wsgi.conf] ************** 2025-02-10 09:41:53.538396 | orchestrator | Monday 10 February 2025 09:39:23 +0000 (0:00:04.052) 0:03:05.916 ******* 2025-02-10 09:41:53.538406 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-02-10 09:41:53.538417 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.538427 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-02-10 09:41:53.538437 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.538447 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-02-10 09:41:53.538457 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.538467 | orchestrator | 2025-02-10 09:41:53.538477 | orchestrator | TASK [ironic : Copying over existing Ironic policy file] *********************** 2025-02-10 09:41:53.538487 | orchestrator | Monday 10 February 2025 09:39:26 +0000 (0:00:02.903) 0:03:08.820 ******* 2025-02-10 09:41:53.538497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.538508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.538525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.538550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.538561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.538572 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.538582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.538593 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.538603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.538629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.538645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.538656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.538667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.538677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.538687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.538698 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.538725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 
'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.538746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.538757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.538768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.538778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.538789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': 
['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.538818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.538829 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.538839 | orchestrator | 2025-02-10 09:41:53.538850 | orchestrator | TASK [ironic : Copying over existing Ironic Inspector policy file] ************* 2025-02-10 09:41:53.538860 | orchestrator | Monday 10 February 2025 09:39:29 +0000 (0:00:03.308) 0:03:12.128 ******* 2025-02-10 09:41:53.538870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.538881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.538891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.538909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.538935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.538946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.538956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.538966 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.538977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 
'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.538987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.539005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.539026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.539037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.539048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.539058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.539069 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.539079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:41:53.539119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:53.539137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 
'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:41:53.539149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:41:53.539159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:41:53.539170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.539180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.539198 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.539209 | orchestrator | 2025-02-10 09:41:53.539219 | orchestrator | TASK [ironic : Copying over ironic-api-wsgi.conf] ****************************** 2025-02-10 09:41:53.539229 | orchestrator | Monday 10 February 2025 09:39:32 +0000 (0:00:02.939) 0:03:15.067 ******* 2025-02-10 09:41:53.539239 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:53.539249 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.539258 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:53.539268 | orchestrator | 2025-02-10 09:41:53.539278 | orchestrator | TASK [ironic : Check ironic containers] **************************************** 2025-02-10 09:41:53.539288 | orchestrator | Monday 10 February 2025 09:39:37 +0000 (0:00:04.247) 0:03:19.315 ******* 2025-02-10 09:41:53.539310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 
'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.539322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.539333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:41:53.539343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.539371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 
'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.539389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:53.539399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.539410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.539428 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:41:53.539444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.539459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.539470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:41:53.539481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.539688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 
'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.539702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.539718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.539729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.539745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.539756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:41:53.539766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 
'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/dnsmasq:2.90.20241206', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:41:53.539777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-prometheus-exporter:24.1.4.20241206', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-02-10 09:41:53.539787 | orchestrator | 2025-02-10 09:41:53.539797 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-02-10 09:41:53.539812 | orchestrator | Monday 10 February 2025 09:39:44 +0000 (0:00:07.195) 0:03:26.510 ******* 2025-02-10 09:41:53.539822 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:53.539833 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:53.539843 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:53.539853 | orchestrator | 2025-02-10 09:41:53.539863 | orchestrator | TASK [ironic : Creating Ironic database] *************************************** 2025-02-10 09:41:53.539873 | orchestrator | Monday 10 February 2025 09:39:44 +0000 (0:00:00.473) 0:03:26.984 ******* 2025-02-10 09:41:53.539883 | orchestrator | changed: [testbed-node-0] => (item={'database_name': 'ironic', 'group': 'ironic-api'}) 2025-02-10 09:41:53.539893 | orchestrator | changed: [testbed-node-0] => (item={'database_name': 'ironic_inspector', 'group': 'ironic-inspector'}) 2025-02-10 09:41:53.539903 | orchestrator | 2025-02-10 09:41:53.539913 | orchestrator | TASK [ironic : Creating Ironic database user and setting permissions] ********** 2025-02-10 09:41:53.539923 | orchestrator | Monday 10 February 2025 09:39:50 +0000 (0:00:05.633) 0:03:32.618 ******* 2025-02-10 09:41:53.539933 | orchestrator | changed: [testbed-node-0] => (item=ironic) 2025-02-10 09:41:53.539944 | orchestrator | changed: [testbed-node-0] => (item=ironic_inspector) 2025-02-10 09:41:53.539954 | orchestrator | 2025-02-10 09:41:53.539963 | orchestrator | TASK [ironic : Running Ironic bootstrap container] ***************************** 2025-02-10 09:41:53.539973 | orchestrator | Monday 10 February 2025 09:39:56 +0000 (0:00:06.114) 0:03:38.732 ******* 2025-02-10 09:41:53.539983 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.539993 | orchestrator | 2025-02-10 09:41:53.540003 | orchestrator | TASK [ironic : Running Ironic Inspector bootstrap container] ******************* 2025-02-10 09:41:53.540013 | orchestrator | Monday 10 February 2025 09:40:14 +0000 (0:00:18.330) 0:03:57.063 ******* 2025-02-10 09:41:53.540023 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.540033 | orchestrator | 2025-02-10 09:41:53.540043 | orchestrator | TASK [ironic : Running ironic-tftp bootstrap container] ************************ 2025-02-10 09:41:53.540053 | orchestrator | Monday 10 February 2025 09:40:25 +0000 (0:00:10.806) 0:04:07.869 ******* 2025-02-10 09:41:53.540063 | 
orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.540073 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:53.540129 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:53.540141 | orchestrator | 2025-02-10 09:41:53.540152 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-02-10 09:41:53.540184 | orchestrator | Monday 10 February 2025 09:40:38 +0000 (0:00:12.641) 0:04:20.511 ******* 2025-02-10 09:41:53.540194 | orchestrator | 2025-02-10 09:41:53.540203 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-02-10 09:41:53.540211 | orchestrator | Monday 10 February 2025 09:40:38 +0000 (0:00:00.058) 0:04:20.569 ******* 2025-02-10 09:41:53.540220 | orchestrator | 2025-02-10 09:41:53.540228 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-02-10 09:41:53.540236 | orchestrator | Monday 10 February 2025 09:40:38 +0000 (0:00:00.156) 0:04:20.725 ******* 2025-02-10 09:41:53.540245 | orchestrator | 2025-02-10 09:41:53.540253 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-conductor container] ****************** 2025-02-10 09:41:53.540262 | orchestrator | Monday 10 February 2025 09:40:38 +0000 (0:00:00.059) 0:04:20.785 ******* 2025-02-10 09:41:53.540271 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.540279 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:53.540292 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:53.540300 | orchestrator | 2025-02-10 09:41:53.540309 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-api container] ************************ 2025-02-10 09:41:53.540317 | orchestrator | Monday 10 February 2025 09:40:56 +0000 (0:00:17.910) 0:04:38.695 ******* 2025-02-10 09:41:53.540326 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:53.540334 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:53.540343 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.540351 | orchestrator | 2025-02-10 09:41:53.540360 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-inspector container] ****************** 2025-02-10 09:41:53.540374 | orchestrator | Monday 10 February 2025 09:41:05 +0000 (0:00:09.321) 0:04:48.017 ******* 2025-02-10 09:41:53.540383 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.540391 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:53.540400 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:53.540409 | orchestrator | 2025-02-10 09:41:53.540417 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-tftp container] *********************** 2025-02-10 09:41:53.540425 | orchestrator | Monday 10 February 2025 09:41:21 +0000 (0:00:15.353) 0:05:03.370 ******* 2025-02-10 09:41:53.540434 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:53.540442 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.540451 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:53.540459 | orchestrator | 2025-02-10 09:41:53.540468 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-http container] *********************** 2025-02-10 09:41:53.540476 | orchestrator | Monday 10 February 2025 09:41:30 +0000 (0:00:09.102) 0:05:12.473 ******* 2025-02-10 09:41:53.540484 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:53.540493 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:53.540501 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:53.540510 | 
orchestrator | 2025-02-10 09:41:53.540518 | orchestrator | TASK [ironic : Flush and delete ironic-inspector iptables chain] *************** 2025-02-10 09:41:53.540527 | orchestrator | Monday 10 February 2025 09:41:47 +0000 (0:00:16.807) 0:05:29.280 ******* 2025-02-10 09:41:53.540535 | orchestrator | ok: [testbed-node-0] => (item=flush) 2025-02-10 09:41:53.540544 | orchestrator | ok: [testbed-node-1] => (item=flush) 2025-02-10 09:41:53.540552 | orchestrator | ok: [testbed-node-2] => (item=flush) 2025-02-10 09:41:53.540561 | orchestrator | ok: [testbed-node-0] => (item=delete-chain) 2025-02-10 09:41:53.540570 | orchestrator | ok: [testbed-node-1] => (item=delete-chain) 2025-02-10 09:41:53.540587 | orchestrator | ok: [testbed-node-2] => (item=delete-chain) 2025-02-10 09:41:53.540596 | orchestrator | 2025-02-10 09:41:53.540604 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:41:53.540613 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:53.540624 | orchestrator | testbed-node-0 : ok=33  changed=26  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:41:53.540637 | orchestrator | testbed-node-1 : ok=23  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-02-10 09:41:53.540647 | orchestrator | testbed-node-2 : ok=23  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-02-10 09:41:53.540655 | orchestrator | 2025-02-10 09:41:53.540664 | orchestrator | 2025-02-10 09:41:53.540672 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:41:53.540681 | orchestrator | Monday 10 February 2025 09:41:50 +0000 (0:00:03.711) 0:05:32.992 ******* 2025-02-10 09:41:53.540689 | orchestrator | =============================================================================== 2025-02-10 09:41:53.540697 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.77s 2025-02-10 09:41:53.540706 | orchestrator | ironic : Copying ironic-agent kernel and initramfs (iPXE) -------------- 21.20s 2025-02-10 09:41:53.540714 | orchestrator | ironic : Running Ironic bootstrap container ---------------------------- 18.33s 2025-02-10 09:41:53.540722 | orchestrator | ironic : Restart ironic-conductor container ---------------------------- 17.91s 2025-02-10 09:41:53.540731 | orchestrator | ironic : Restart ironic-http container --------------------------------- 16.81s 2025-02-10 09:41:53.540739 | orchestrator | service-ks-register : ironic | Granting user roles --------------------- 16.33s 2025-02-10 09:41:53.540748 | orchestrator | ironic : Restart ironic-inspector container ---------------------------- 15.35s 2025-02-10 09:41:53.540756 | orchestrator | service-ks-register : ironic | Creating endpoints ---------------------- 13.44s 2025-02-10 09:41:53.540769 | orchestrator | ironic : Running ironic-tftp bootstrap container ----------------------- 12.64s 2025-02-10 09:41:53.540778 | orchestrator | ironic : Running Ironic Inspector bootstrap container ------------------ 10.81s 2025-02-10 09:41:53.540786 | orchestrator | ironic : Copying over inspector.conf ----------------------------------- 10.31s 2025-02-10 09:41:53.540801 | orchestrator | ironic : Restart ironic-api container ----------------------------------- 9.32s 2025-02-10 09:41:53.540810 | orchestrator | ironic : Restart ironic-tftp container ---------------------------------- 9.10s 2025-02-10 
09:41:53.540818 | orchestrator | ironic : Copying over ironic.conf --------------------------------------- 8.43s 2025-02-10 09:41:53.540827 | orchestrator | service-ks-register : ironic | Creating users --------------------------- 8.11s 2025-02-10 09:41:53.540835 | orchestrator | ironic : Copying over config.json files for services -------------------- 7.98s 2025-02-10 09:41:53.540844 | orchestrator | ironic : Copying ironic-agent kernel and initramfs (PXE) ---------------- 7.55s 2025-02-10 09:41:53.540852 | orchestrator | service-cert-copy : ironic | Copying over extra CA certificates --------- 7.28s 2025-02-10 09:41:53.540861 | orchestrator | ironic : Check ironic containers ---------------------------------------- 7.20s 2025-02-10 09:41:53.540872 | orchestrator | service-ks-register : ironic | Creating services ------------------------ 7.08s 2025-02-10 09:41:56.579645 | orchestrator | 2025-02-10 09:41:53 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:56.579783 | orchestrator | 2025-02-10 09:41:53 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:56.579804 | orchestrator | 2025-02-10 09:41:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:56.579839 | orchestrator | 2025-02-10 09:41:56 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:41:56.580635 | orchestrator | 2025-02-10 09:41:56 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:56.580673 | orchestrator | 2025-02-10 09:41:56 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:56.581493 | orchestrator | 2025-02-10 09:41:56 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:41:59.629850 | orchestrator | 2025-02-10 09:41:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:59.630246 | orchestrator | 2025-02-10 09:41:59 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:41:59.632593 | orchestrator | 2025-02-10 09:41:59 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:41:59.632636 | orchestrator | 2025-02-10 09:41:59 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:41:59.635539 | orchestrator | 2025-02-10 09:41:59 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:02.688373 | orchestrator | 2025-02-10 09:41:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:02.688534 | orchestrator | 2025-02-10 09:42:02 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:02.688695 | orchestrator | 2025-02-10 09:42:02 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:02.689858 | orchestrator | 2025-02-10 09:42:02 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:02.691008 | orchestrator | 2025-02-10 09:42:02 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:02.691324 | orchestrator | 2025-02-10 09:42:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:05.748569 | orchestrator | 2025-02-10 09:42:05 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:05.750806 | orchestrator | 2025-02-10 09:42:05 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:05.752488 | orchestrator | 2025-02-10 09:42:05 | INFO  | 
Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:05.755477 | orchestrator | 2025-02-10 09:42:05 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:08.807552 | orchestrator | 2025-02-10 09:42:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:08.807713 | orchestrator | 2025-02-10 09:42:08 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:11.847699 | orchestrator | 2025-02-10 09:42:08 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:11.847848 | orchestrator | 2025-02-10 09:42:08 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:11.847870 | orchestrator | 2025-02-10 09:42:08 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:11.847888 | orchestrator | 2025-02-10 09:42:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:11.847925 | orchestrator | 2025-02-10 09:42:11 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:11.848358 | orchestrator | 2025-02-10 09:42:11 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:11.848399 | orchestrator | 2025-02-10 09:42:11 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:11.849079 | orchestrator | 2025-02-10 09:42:11 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:11.849642 | orchestrator | 2025-02-10 09:42:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:14.891381 | orchestrator | 2025-02-10 09:42:14 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:14.893113 | orchestrator | 2025-02-10 09:42:14 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:14.893150 | orchestrator | 2025-02-10 09:42:14 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:14.897271 | orchestrator | 2025-02-10 09:42:14 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:17.924654 | orchestrator | 2025-02-10 09:42:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:17.924815 | orchestrator | 2025-02-10 09:42:17 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:17.925225 | orchestrator | 2025-02-10 09:42:17 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:17.926075 | orchestrator | 2025-02-10 09:42:17 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:17.927195 | orchestrator | 2025-02-10 09:42:17 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:20.971522 | orchestrator | 2025-02-10 09:42:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:20.971800 | orchestrator | 2025-02-10 09:42:20 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:20.972875 | orchestrator | 2025-02-10 09:42:20 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:20.972942 | orchestrator | 2025-02-10 09:42:20 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:20.974456 | orchestrator | 2025-02-10 09:42:20 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:24.041821 | orchestrator | 2025-02-10 09:42:20 | INFO  | 
Wait 1 second(s) until the next check 2025-02-10 09:42:24.041960 | orchestrator | 2025-02-10 09:42:24 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:24.047634 | orchestrator | 2025-02-10 09:42:24 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:24.047684 | orchestrator | 2025-02-10 09:42:24 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:24.047703 | orchestrator | 2025-02-10 09:42:24 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:27.081010 | orchestrator | 2025-02-10 09:42:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:27.081221 | orchestrator | 2025-02-10 09:42:27 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:27.083678 | orchestrator | 2025-02-10 09:42:27 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:27.084259 | orchestrator | 2025-02-10 09:42:27 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:27.085008 | orchestrator | 2025-02-10 09:42:27 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:30.139273 | orchestrator | 2025-02-10 09:42:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:30.139443 | orchestrator | 2025-02-10 09:42:30 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:30.139850 | orchestrator | 2025-02-10 09:42:30 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:30.139885 | orchestrator | 2025-02-10 09:42:30 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state STARTED 2025-02-10 09:42:30.139907 | orchestrator | 2025-02-10 09:42:30 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:33.175584 | orchestrator | 2025-02-10 09:42:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:33.175740 | orchestrator | 2025-02-10 09:42:33 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:33.177684 | orchestrator | 2025-02-10 09:42:33 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:33.178312 | orchestrator | 2025-02-10 09:42:33 | INFO  | Task 699f35ff-6591-47c1-b2df-e7941a1230d6 is in state SUCCESS 2025-02-10 09:42:33.180421 | orchestrator | 2025-02-10 09:42:33.180469 | orchestrator | 2025-02-10 09:42:33.180485 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:42:33.180500 | orchestrator | 2025-02-10 09:42:33.180515 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:42:33.180529 | orchestrator | Monday 10 February 2025 09:40:17 +0000 (0:00:00.858) 0:00:00.858 ******* 2025-02-10 09:42:33.180543 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:42:33.180559 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:42:33.180573 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:42:33.180587 | orchestrator | 2025-02-10 09:42:33.180601 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:42:33.180615 | orchestrator | Monday 10 February 2025 09:40:18 +0000 (0:00:00.631) 0:00:01.490 ******* 2025-02-10 09:42:33.180629 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-02-10 09:42:33.180643 | orchestrator | ok: [testbed-node-1] 
=> (item=enable_magnum_True) 2025-02-10 09:42:33.180658 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-02-10 09:42:33.180671 | orchestrator | 2025-02-10 09:42:33.180686 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-02-10 09:42:33.180700 | orchestrator | 2025-02-10 09:42:33.180714 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-10 09:42:33.180762 | orchestrator | Monday 10 February 2025 09:40:18 +0000 (0:00:00.378) 0:00:01.869 ******* 2025-02-10 09:42:33.181017 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:42:33.181041 | orchestrator | 2025-02-10 09:42:33.181055 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-02-10 09:42:33.181069 | orchestrator | Monday 10 February 2025 09:40:19 +0000 (0:00:00.762) 0:00:02.631 ******* 2025-02-10 09:42:33.181084 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-02-10 09:42:33.181126 | orchestrator | 2025-02-10 09:42:33.181152 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-02-10 09:42:33.181176 | orchestrator | Monday 10 February 2025 09:40:22 +0000 (0:00:03.503) 0:00:06.134 ******* 2025-02-10 09:42:33.181196 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-02-10 09:42:33.181210 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-02-10 09:42:33.181224 | orchestrator | 2025-02-10 09:42:33.181238 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-02-10 09:42:33.181252 | orchestrator | Monday 10 February 2025 09:40:31 +0000 (0:00:08.025) 0:00:14.159 ******* 2025-02-10 09:42:33.181266 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:42:33.181280 | orchestrator | 2025-02-10 09:42:33.181294 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-02-10 09:42:33.181308 | orchestrator | Monday 10 February 2025 09:40:34 +0000 (0:00:03.335) 0:00:17.495 ******* 2025-02-10 09:42:33.181322 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:42:33.181336 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-02-10 09:42:33.181350 | orchestrator | 2025-02-10 09:42:33.181363 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-02-10 09:42:33.181377 | orchestrator | Monday 10 February 2025 09:40:37 +0000 (0:00:03.017) 0:00:20.513 ******* 2025-02-10 09:42:33.181391 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:42:33.181405 | orchestrator | 2025-02-10 09:42:33.181419 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-02-10 09:42:33.181433 | orchestrator | Monday 10 February 2025 09:40:40 +0000 (0:00:03.038) 0:00:23.551 ******* 2025-02-10 09:42:33.181447 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-02-10 09:42:33.181460 | orchestrator | 2025-02-10 09:42:33.181474 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-02-10 09:42:33.181488 | orchestrator | Monday 10 February 2025 09:40:44 +0000 
(0:00:03.607) 0:00:27.159 ******* 2025-02-10 09:42:33.181502 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:33.181516 | orchestrator | 2025-02-10 09:42:33.181530 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-02-10 09:42:33.181561 | orchestrator | Monday 10 February 2025 09:40:47 +0000 (0:00:03.110) 0:00:30.269 ******* 2025-02-10 09:42:33.181575 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:33.181589 | orchestrator | 2025-02-10 09:42:33.181603 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-02-10 09:42:33.181617 | orchestrator | Monday 10 February 2025 09:40:51 +0000 (0:00:04.595) 0:00:34.864 ******* 2025-02-10 09:42:33.181631 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:33.181647 | orchestrator | 2025-02-10 09:42:33.181662 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-02-10 09:42:33.181678 | orchestrator | Monday 10 February 2025 09:40:55 +0000 (0:00:04.102) 0:00:38.967 ******* 2025-02-10 09:42:33.181710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.181761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.181779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.181795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.181812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.181845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.181861 | orchestrator | 2025-02-10 09:42:33.181875 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-02-10 09:42:33.181889 | orchestrator | Monday 10 February 2025 09:40:57 +0000 (0:00:02.157) 0:00:41.124 ******* 2025-02-10 09:42:33.181903 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:33.181917 | orchestrator | 2025-02-10 09:42:33.181931 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-02-10 09:42:33.181945 | orchestrator | Monday 10 February 2025 09:40:58 +0000 (0:00:00.120) 0:00:41.245 ******* 2025-02-10 09:42:33.181958 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:33.181977 | orchestrator | skipping: 
[testbed-node-1] 2025-02-10 09:42:33.181992 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:33.182007 | orchestrator | 2025-02-10 09:42:33.182083 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-02-10 09:42:33.182130 | orchestrator | Monday 10 February 2025 09:40:58 +0000 (0:00:00.421) 0:00:41.666 ******* 2025-02-10 09:42:33.182145 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:42:33.182159 | orchestrator | 2025-02-10 09:42:33.182173 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-02-10 09:42:33.182187 | orchestrator | Monday 10 February 2025 09:40:59 +0000 (0:00:00.624) 0:00:42.291 ******* 2025-02-10 09:42:33.182201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.182216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.182231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.182266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.182282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.182297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.182311 | orchestrator | 2025-02-10 09:42:33.182325 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-02-10 09:42:33.182339 | orchestrator | Monday 10 February 2025 09:41:02 +0000 (0:00:03.530) 0:00:45.821 ******* 2025-02-10 09:42:33.182353 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:42:33.182368 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:42:33.182381 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:42:33.182395 | orchestrator | 2025-02-10 09:42:33.182409 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-10 09:42:33.182423 | orchestrator | Monday 10 February 2025 09:41:03 +0000 (0:00:00.392) 0:00:46.214 ******* 2025-02-10 09:42:33.182437 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:42:33.182451 | orchestrator | 2025-02-10 09:42:33.182472 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-02-10 09:42:33.182486 | orchestrator 
| Monday 10 February 2025 09:41:04 +0000 (0:00:01.105) 0:00:47.320 ******* 2025-02-10 09:42:33.182505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.182542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.182567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.182589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.182604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.182627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.182642 | orchestrator | 2025-02-10 09:42:33.182656 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-02-10 09:42:33.182671 | orchestrator | Monday 10 February 2025 09:41:07 +0000 (0:00:03.350) 0:00:50.670 ******* 2025-02-10 09:42:33.182694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.182709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.182724 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:33.182738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.182760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.182774 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:33.182789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.182810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.182825 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:33.182839 | orchestrator | 2025-02-10 09:42:33.182853 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-02-10 09:42:33.182866 | orchestrator | Monday 10 February 2025 09:41:09 +0000 (0:00:02.461) 0:00:53.131 ******* 2025-02-10 09:42:33.182881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.182903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.182917 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:33.182932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.182954 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.182969 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:33.182983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.182998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.183019 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:33.183034 | orchestrator | 2025-02-10 09:42:33.183047 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-02-10 09:42:33.183061 | orchestrator | Monday 10 February 2025 09:41:13 +0000 (0:00:03.356) 0:00:56.488 ******* 2025-02-10 09:42:33.183076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183260 | orchestrator | 2025-02-10 09:42:33.183285 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-02-10 09:42:33.183301 | orchestrator | Monday 10 February 2025 09:41:17 +0000 (0:00:03.995) 0:01:00.484 ******* 2025-02-10 09:42:33.183323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183434 | orchestrator | 2025-02-10 09:42:33.183446 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-02-10 09:42:33.183459 | orchestrator | Monday 10 February 2025 09:41:29 +0000 (0:00:12.437) 0:01:12.921 ******* 2025-02-10 09:42:33.183472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.183491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.183503 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:33.183516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.183529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.183542 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:33.183561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:42:33.183575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:42:33.183595 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:33.183615 | orchestrator | 2025-02-10 09:42:33.183628 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-02-10 09:42:33.183640 | orchestrator | Monday 10 February 2025 09:41:32 +0000 (0:00:02.431) 0:01:15.352 ******* 2025-02-10 09:42:33.183653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:42:33.183722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:33.183748 | orchestrator | 2025-02-10 09:42:33.183760 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-10 09:42:33.183780 | orchestrator | Monday 10 February 2025 09:41:37 +0000 (0:00:05.508) 0:01:20.861 ******* 2025-02-10 09:42:33.183793 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:33.183805 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:33.183817 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:33.183829 | orchestrator | 2025-02-10 09:42:33.183842 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-02-10 09:42:33.183854 | orchestrator | Monday 10 February 2025 09:41:38 +0000 (0:00:01.024) 0:01:21.885 ******* 2025-02-10 09:42:33.183866 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:33.183879 | orchestrator | 2025-02-10 09:42:33.183891 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-02-10 09:42:33.183903 | orchestrator | Monday 10 February 2025 09:41:43 +0000 (0:00:04.356) 0:01:26.244 ******* 2025-02-10 09:42:33.183915 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:33.183927 | orchestrator | 2025-02-10 09:42:33.183940 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-02-10 09:42:33.183952 | orchestrator | Monday 10 February 2025 09:41:45 +0000 (0:00:02.623) 0:01:28.868 ******* 2025-02-10 09:42:33.183964 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:33.183976 | orchestrator | 2025-02-10 09:42:33.183988 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-10 09:42:33.184000 | orchestrator | Monday 10 February 2025 09:42:04 +0000 (0:00:19.078) 0:01:47.946 ******* 2025-02-10 09:42:33.184012 | orchestrator | 2025-02-10 09:42:33.184024 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-10 09:42:33.184036 | orchestrator | Monday 10 February 2025 09:42:04 +0000 (0:00:00.176) 0:01:48.123 ******* 2025-02-10 09:42:33.184048 | orchestrator | 2025-02-10 09:42:33.184060 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-10 09:42:33.184072 | orchestrator | Monday 10 February 2025 09:42:05 +0000 (0:00:00.287) 0:01:48.411 ******* 2025-02-10 09:42:33.184084 | orchestrator | 2025-02-10 09:42:33.184127 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-02-10 09:42:33.184140 | orchestrator | Monday 10 February 2025 09:42:05 +0000 (0:00:00.139) 0:01:48.550 ******* 2025-02-10 09:42:33.184152 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:33.184164 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:42:33.184177 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:42:33.184189 | orchestrator | 2025-02-10 09:42:33.184207 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-02-10 09:42:36.234726 | orchestrator | Monday 10 February 2025 09:42:19 +0000 (0:00:13.994) 0:02:02.544 ******* 2025-02-10 09:42:36.234994 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:42:36.235022 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:42:36.235038 | orchestrator | changed: [testbed-node-0] 2025-02-10 
09:42:36.235052 | orchestrator | 2025-02-10 09:42:36.235068 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:42:36.235084 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:42:36.235144 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:42:36.235159 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:42:36.235173 | orchestrator | 2025-02-10 09:42:36.235187 | orchestrator | 2025-02-10 09:42:36.235201 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:42:36.235215 | orchestrator | Monday 10 February 2025 09:42:32 +0000 (0:00:12.806) 0:02:15.350 ******* 2025-02-10 09:42:36.235229 | orchestrator | =============================================================================== 2025-02-10 09:42:36.235243 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.08s 2025-02-10 09:42:36.235257 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.99s 2025-02-10 09:42:36.235271 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.81s 2025-02-10 09:42:36.235284 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 12.44s 2025-02-10 09:42:36.235298 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 8.03s 2025-02-10 09:42:36.235312 | orchestrator | magnum : Check magnum containers ---------------------------------------- 5.51s 2025-02-10 09:42:36.235326 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.60s 2025-02-10 09:42:36.235340 | orchestrator | magnum : Creating Magnum database --------------------------------------- 4.36s 2025-02-10 09:42:36.235354 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.10s 2025-02-10 09:42:36.235367 | orchestrator | magnum : Copying over config.json files for services -------------------- 4.00s 2025-02-10 09:42:36.235405 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.61s 2025-02-10 09:42:36.235420 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.53s 2025-02-10 09:42:36.235435 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.50s 2025-02-10 09:42:36.235449 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 3.36s 2025-02-10 09:42:36.235464 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.35s 2025-02-10 09:42:36.235479 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.34s 2025-02-10 09:42:36.235493 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.11s 2025-02-10 09:42:36.235507 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.04s 2025-02-10 09:42:36.235521 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.02s 2025-02-10 09:42:36.235535 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.62s 2025-02-10 09:42:36.235583 | orchestrator | 2025-02-10 
09:42:33 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:36.235600 | orchestrator | 2025-02-10 09:42:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:36.235637 | orchestrator | 2025-02-10 09:42:36 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:36.235764 | orchestrator | 2025-02-10 09:42:36 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:36.235792 | orchestrator | 2025-02-10 09:42:36 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:42:36.237124 | orchestrator | 2025-02-10 09:42:36 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:39.274286 | orchestrator | 2025-02-10 09:42:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:39.274450 | orchestrator | 2025-02-10 09:42:39 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:42.306514 | orchestrator | 2025-02-10 09:42:39 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:42.306822 | orchestrator | 2025-02-10 09:42:39 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:42:42.306859 | orchestrator | 2025-02-10 09:42:39 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:42.306883 | orchestrator | 2025-02-10 09:42:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:42.306931 | orchestrator | 2025-02-10 09:42:42 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state STARTED 2025-02-10 09:42:42.307484 | orchestrator | 2025-02-10 09:42:42 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:42.307545 | orchestrator | 2025-02-10 09:42:42 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:42:42.307584 | orchestrator | 2025-02-10 09:42:42 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:45.353867 | orchestrator | 2025-02-10 09:42:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:45.354082 | orchestrator | 2025-02-10 09:42:45 | INFO  | Task f640ac78-495d-4f68-b1c0-1cfe4252168c is in state SUCCESS 2025-02-10 09:42:45.354544 | orchestrator | 2025-02-10 09:42:45 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:45.355740 | orchestrator | 2025-02-10 09:42:45 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:42:45.357650 | orchestrator | 2025-02-10 09:42:45 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:42:45.359635 | orchestrator | 2025-02-10 09:42:45 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:45.359903 | orchestrator | 2025-02-10 09:42:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:48.411718 | orchestrator | 2025-02-10 09:42:48 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:48.416942 | orchestrator | 2025-02-10 09:42:48 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:42:48.420914 | orchestrator | 2025-02-10 09:42:48 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:42:48.422655 | orchestrator | 2025-02-10 09:42:48 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:48.423172 | orchestrator | 2025-02-10 
09:42:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:51.480849 | orchestrator | 2025-02-10 09:42:51 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:51.482550 | orchestrator | 2025-02-10 09:42:51 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:42:51.482602 | orchestrator | 2025-02-10 09:42:51 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:42:51.484788 | orchestrator | 2025-02-10 09:42:51 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state STARTED 2025-02-10 09:42:54.534760 | orchestrator | 2025-02-10 09:42:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:54.534876 | orchestrator | 2025-02-10 09:42:54 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:42:54.541231 | orchestrator | 2025-02-10 09:42:54 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:42:54.541358 | orchestrator | 2025-02-10 09:42:54 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:42:54.541694 | orchestrator | 2025-02-10 09:42:54.541709 | orchestrator | 2025-02-10 09:42:54.541717 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:42:54.541726 | orchestrator | 2025-02-10 09:42:54.541775 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:42:54.541786 | orchestrator | Monday 10 February 2025 09:42:02 +0000 (0:00:00.655) 0:00:00.655 ******* 2025-02-10 09:42:54.541795 | orchestrator | ok: [testbed-manager] 2025-02-10 09:42:54.541806 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:42:54.541853 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:42:54.541864 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:42:54.541872 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:42:54.541881 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:42:54.541889 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:42:54.541897 | orchestrator | 2025-02-10 09:42:54.541906 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:42:54.541914 | orchestrator | Monday 10 February 2025 09:42:04 +0000 (0:00:01.812) 0:00:02.468 ******* 2025-02-10 09:42:54.541924 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-02-10 09:42:54.541933 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-02-10 09:42:54.541941 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-02-10 09:42:54.541949 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-02-10 09:42:54.541958 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-02-10 09:42:54.541966 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-02-10 09:42:54.542008 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-02-10 09:42:54.542094 | orchestrator | 2025-02-10 09:42:54.542157 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-02-10 09:42:54.542467 | orchestrator | 2025-02-10 09:42:54.542594 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-02-10 09:42:54.542618 | orchestrator | Monday 10 February 2025 09:42:07 +0000 (0:00:02.571) 0:00:05.039 ******* 2025-02-10 09:42:54.542634 | orchestrator | included: 
/ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:42:54.542645 | orchestrator | 2025-02-10 09:42:54.542654 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-02-10 09:42:54.542662 | orchestrator | Monday 10 February 2025 09:42:10 +0000 (0:00:03.432) 0:00:08.471 ******* 2025-02-10 09:42:54.542671 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-02-10 09:42:54.542679 | orchestrator | 2025-02-10 09:42:54.542687 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-02-10 09:42:54.542694 | orchestrator | Monday 10 February 2025 09:42:15 +0000 (0:00:04.785) 0:00:13.257 ******* 2025-02-10 09:42:54.542727 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-02-10 09:42:54.542737 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-02-10 09:42:54.542745 | orchestrator | 2025-02-10 09:42:54.542943 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-02-10 09:42:54.542958 | orchestrator | Monday 10 February 2025 09:42:22 +0000 (0:00:06.731) 0:00:19.988 ******* 2025-02-10 09:42:54.542966 | orchestrator | ok: [testbed-manager] => (item=service) 2025-02-10 09:42:54.542975 | orchestrator | 2025-02-10 09:42:54.542984 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-02-10 09:42:54.542992 | orchestrator | Monday 10 February 2025 09:42:24 +0000 (0:00:02.956) 0:00:22.945 ******* 2025-02-10 09:42:54.543033 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:42:54.543043 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-02-10 09:42:54.543051 | orchestrator | 2025-02-10 09:42:54.543059 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-02-10 09:42:54.543067 | orchestrator | Monday 10 February 2025 09:42:28 +0000 (0:00:03.525) 0:00:26.470 ******* 2025-02-10 09:42:54.543074 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-02-10 09:42:54.543276 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-02-10 09:42:54.543286 | orchestrator | 2025-02-10 09:42:54.543294 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-02-10 09:42:54.543302 | orchestrator | Monday 10 February 2025 09:42:36 +0000 (0:00:08.066) 0:00:34.537 ******* 2025-02-10 09:42:54.543310 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-02-10 09:42:54.543318 | orchestrator | 2025-02-10 09:42:54.543326 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:42:54.543334 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:42:54.543343 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:42:54.543351 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:42:54.543359 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-02-10 09:42:54.543367 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:42:54.543419 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:42:54.543430 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:42:54.543438 | orchestrator | 2025-02-10 09:42:54.543445 | orchestrator | 2025-02-10 09:42:54.543454 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:42:54.543477 | orchestrator | Monday 10 February 2025 09:42:42 +0000 (0:00:06.019) 0:00:40.556 ******* 2025-02-10 09:42:54.543487 | orchestrator | =============================================================================== 2025-02-10 09:42:54.543501 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 8.07s 2025-02-10 09:42:54.543517 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.73s 2025-02-10 09:42:54.543531 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.02s 2025-02-10 09:42:54.543545 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.79s 2025-02-10 09:42:54.543560 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.53s 2025-02-10 09:42:54.543589 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.43s 2025-02-10 09:42:54.543603 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.96s 2025-02-10 09:42:54.543617 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.57s 2025-02-10 09:42:54.543630 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.81s 2025-02-10 09:42:54.543643 | orchestrator | 2025-02-10 09:42:54.543656 | orchestrator | 2025-02-10 09:42:54.543668 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:42:54.543680 | orchestrator | 2025-02-10 09:42:54.543693 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:42:54.543706 | orchestrator | Monday 10 February 2025 09:36:18 +0000 (0:00:00.313) 0:00:00.313 ******* 2025-02-10 09:42:54.543718 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:42:54.543732 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:42:54.543745 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:42:54.543759 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:42:54.543772 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:42:54.543784 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:42:54.544038 | orchestrator | 2025-02-10 09:42:54.544053 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:42:54.544067 | orchestrator | Monday 10 February 2025 09:36:19 +0000 (0:00:01.048) 0:00:01.361 ******* 2025-02-10 09:42:54.544081 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-02-10 09:42:54.544094 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-02-10 09:42:54.544134 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-02-10 09:42:54.544149 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-02-10 
09:42:54.544158 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-02-10 09:42:54.544166 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-02-10 09:42:54.544174 | orchestrator | 2025-02-10 09:42:54.544182 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-02-10 09:42:54.544190 | orchestrator | 2025-02-10 09:42:54.544198 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-10 09:42:54.544206 | orchestrator | Monday 10 February 2025 09:36:20 +0000 (0:00:01.064) 0:00:02.425 ******* 2025-02-10 09:42:54.544214 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:42:54.544223 | orchestrator | 2025-02-10 09:42:54.544231 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-02-10 09:42:54.544239 | orchestrator | Monday 10 February 2025 09:36:22 +0000 (0:00:01.989) 0:00:04.415 ******* 2025-02-10 09:42:54.544247 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:42:54.544255 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:42:54.544263 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:42:54.544271 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:42:54.544279 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:42:54.544287 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:42:54.544294 | orchestrator | 2025-02-10 09:42:54.544302 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-02-10 09:42:54.544311 | orchestrator | Monday 10 February 2025 09:36:25 +0000 (0:00:02.394) 0:00:06.809 ******* 2025-02-10 09:42:54.544318 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:42:54.544326 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:42:54.544334 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:42:54.544342 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:42:54.544354 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:42:54.544361 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:42:54.544369 | orchestrator | 2025-02-10 09:42:54.544378 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-02-10 09:42:54.544386 | orchestrator | Monday 10 February 2025 09:36:27 +0000 (0:00:02.098) 0:00:08.907 ******* 2025-02-10 09:42:54.544403 | orchestrator | ok: [testbed-node-0] => { 2025-02-10 09:42:54.544411 | orchestrator |  "changed": false, 2025-02-10 09:42:54.544419 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:42:54.544428 | orchestrator | } 2025-02-10 09:42:54.544534 | orchestrator | ok: [testbed-node-1] => { 2025-02-10 09:42:54.544545 | orchestrator |  "changed": false, 2025-02-10 09:42:54.544575 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:42:54.544583 | orchestrator | } 2025-02-10 09:42:54.544592 | orchestrator | ok: [testbed-node-2] => { 2025-02-10 09:42:54.544600 | orchestrator |  "changed": false, 2025-02-10 09:42:54.544793 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:42:54.544806 | orchestrator | } 2025-02-10 09:42:54.544815 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:42:54.544824 | orchestrator |  "changed": false, 2025-02-10 09:42:54.544834 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:42:54.544842 | orchestrator | } 2025-02-10 09:42:54.544852 | orchestrator | ok: [testbed-node-4] => { 
2025-02-10 09:42:54.544861 | orchestrator |  "changed": false, 2025-02-10 09:42:54.544870 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:42:54.544879 | orchestrator | } 2025-02-10 09:42:54.544888 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:42:54.544948 | orchestrator |  "changed": false, 2025-02-10 09:42:54.544960 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:42:54.544969 | orchestrator | } 2025-02-10 09:42:54.544978 | orchestrator | 2025-02-10 09:42:54.544987 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-02-10 09:42:54.544996 | orchestrator | Monday 10 February 2025 09:36:28 +0000 (0:00:01.373) 0:00:10.281 ******* 2025-02-10 09:42:54.545004 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.545012 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.545020 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.545028 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.545077 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.545086 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.545094 | orchestrator | 2025-02-10 09:42:54.545124 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-02-10 09:42:54.545209 | orchestrator | Monday 10 February 2025 09:36:29 +0000 (0:00:01.243) 0:00:11.524 ******* 2025-02-10 09:42:54.545224 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-02-10 09:42:54.545237 | orchestrator | 2025-02-10 09:42:54.545250 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-02-10 09:42:54.545258 | orchestrator | Monday 10 February 2025 09:36:33 +0000 (0:00:03.426) 0:00:14.950 ******* 2025-02-10 09:42:54.545499 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-02-10 09:42:54.545509 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-02-10 09:42:54.545517 | orchestrator | 2025-02-10 09:42:54.545525 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-02-10 09:42:54.545540 | orchestrator | Monday 10 February 2025 09:36:39 +0000 (0:00:06.136) 0:00:21.086 ******* 2025-02-10 09:42:54.545548 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:42:54.545556 | orchestrator | 2025-02-10 09:42:54.545564 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-02-10 09:42:54.545572 | orchestrator | Monday 10 February 2025 09:36:42 +0000 (0:00:03.457) 0:00:24.544 ******* 2025-02-10 09:42:54.545580 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:42:54.545588 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-02-10 09:42:54.545596 | orchestrator | 2025-02-10 09:42:54.545604 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-02-10 09:42:54.545612 | orchestrator | Monday 10 February 2025 09:36:47 +0000 (0:00:04.306) 0:00:28.850 ******* 2025-02-10 09:42:54.545620 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:42:54.545638 | orchestrator | 2025-02-10 09:42:54.545683 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-02-10 09:42:54.545747 | orchestrator | Monday 10 February 
2025 09:36:51 +0000 (0:00:03.956) 0:00:32.807 ******* 2025-02-10 09:42:54.545758 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-02-10 09:42:54.545766 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-02-10 09:42:54.545775 | orchestrator | 2025-02-10 09:42:54.545813 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-10 09:42:54.545823 | orchestrator | Monday 10 February 2025 09:36:59 +0000 (0:00:08.481) 0:00:41.289 ******* 2025-02-10 09:42:54.545832 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.545840 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.545848 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.546036 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.546056 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.546065 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.546072 | orchestrator | 2025-02-10 09:42:54.546081 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-02-10 09:42:54.546089 | orchestrator | Monday 10 February 2025 09:37:00 +0000 (0:00:00.823) 0:00:42.112 ******* 2025-02-10 09:42:54.546096 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.546125 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.546160 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.546169 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.546215 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.546224 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.546232 | orchestrator | 2025-02-10 09:42:54.546240 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-02-10 09:42:54.546288 | orchestrator | Monday 10 February 2025 09:37:05 +0000 (0:00:04.751) 0:00:46.864 ******* 2025-02-10 09:42:54.546298 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:42:54.546306 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:42:54.546315 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:42:54.546323 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:42:54.546330 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:42:54.546339 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:42:54.546346 | orchestrator | 2025-02-10 09:42:54.546355 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-02-10 09:42:54.546363 | orchestrator | Monday 10 February 2025 09:37:07 +0000 (0:00:01.866) 0:00:48.731 ******* 2025-02-10 09:42:54.546371 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.546379 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.546387 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.546395 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.546404 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.546411 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.546419 | orchestrator | 2025-02-10 09:42:54.546428 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-02-10 09:42:54.546435 | orchestrator | Monday 10 February 2025 09:37:11 +0000 (0:00:04.516) 0:00:53.247 ******* 2025-02-10 09:42:54.546499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.546525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.546534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.546544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.546553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.546749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.546765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.546782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.546792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.546803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.546812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.546820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.546868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.546887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.546897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.546908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.546917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.546964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.546982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.546992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.547018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.547080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.547088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.547209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.547283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.547321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.547542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.547580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.547589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.547597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547606 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.547615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.547704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.547736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.547814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.547838 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.547854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.547869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.548200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.548298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.548315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.548325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.548335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.548345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.548355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.548372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.548428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.548442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.548452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.548463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.548473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.548489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.548578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.548595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.548605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.548616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.549706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.549803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.549818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 2025-02-10 09:42:54 | INFO  | Task 0cfb9aab-dcda-43b5-a82b-95ad00a9ff2d is in state SUCCESS 2025-02-10 09:42:54.549830 | orchestrator | '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.549841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.550262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.550273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.550594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.550615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.550633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.550665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.550749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.550773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.550784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.550810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.550855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.550877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.550886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.550905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.550916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.551219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.551254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.551266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.551277 | orchestrator | 2025-02-10 09:42:54.551288 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-02-10 09:42:54.551307 | orchestrator | Monday 10 February 2025 09:37:19 +0000 (0:00:08.045) 0:01:01.292 ******* 2025-02-10 09:42:54.551317 | orchestrator | [WARNING]: Skipped 2025-02-10 09:42:54.551328 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-02-10 09:42:54.551339 | orchestrator | due to this access issue: 2025-02-10 09:42:54.551355 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-02-10 09:42:54.551365 | orchestrator | a directory 2025-02-10 09:42:54.551375 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:42:54.551385 | orchestrator | 2025-02-10 09:42:54.551395 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-10 09:42:54.551405 | orchestrator | Monday 10 February 2025 09:37:21 +0000 (0:00:01.755) 0:01:03.048 ******* 2025-02-10 09:42:54.551416 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:42:54.551427 | orchestrator | 2025-02-10 09:42:54.551437 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-02-10 09:42:54.551447 | orchestrator | Monday 10 February 2025 09:37:23 +0000 (0:00:02.344) 0:01:05.392 ******* 2025-02-10 09:42:54.551458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.551528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.551545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.551564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.551638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.551651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.551663 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.554326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.554361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.554403 | orchestrator | 2025-02-10 09:42:54.554418 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-02-10 09:42:54.554432 | orchestrator | Monday 10 February 2025 09:37:31 +0000 (0:00:07.386) 0:01:12.779 ******* 2025-02-10 09:42:54.554445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.554458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.554471 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.554486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.554510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.554530 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.554544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.554557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.554569 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.554582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.554595 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.554608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.554621 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.554641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.554660 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.554673 | orchestrator | 2025-02-10 09:42:54.554685 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-02-10 09:42:54.554698 | orchestrator | Monday 10 February 2025 09:37:35 +0000 (0:00:04.632) 0:01:17.412 ******* 2025-02-10 09:42:54.554710 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.554723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.554736 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.554749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.554762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.554775 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.554794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.554813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.554826 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.554839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.554854 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.554876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.554896 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.554916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.554946 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.554967 | orchestrator | 2025-02-10 09:42:54.554988 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-02-10 09:42:54.555018 | orchestrator | Monday 10 February 2025 09:37:41 +0000 (0:00:05.258) 0:01:22.671 ******* 2025-02-10 09:42:54.555040 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.555062 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.555083 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.555130 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.555153 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.555175 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.555195 | orchestrator | 2025-02-10 09:42:54.555215 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-02-10 09:42:54.555252 | orchestrator | Monday 10 February 2025 09:37:47 +0000 (0:00:06.290) 0:01:28.961 ******* 2025-02-10 09:42:54.555273 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.555294 | orchestrator | 2025-02-10 09:42:54.555315 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-02-10 09:42:54.555336 | orchestrator | Monday 10 February 2025 09:37:47 +0000 (0:00:00.357) 0:01:29.319 ******* 2025-02-10 09:42:54.555357 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.555378 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.555398 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.555420 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.555443 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.555465 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.555486 | orchestrator | 2025-02-10 09:42:54.555502 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-02-10 09:42:54.555514 | orchestrator | Monday 10 February 2025 09:37:49 +0000 (0:00:01.834) 0:01:31.153 ******* 2025-02-10 09:42:54.555527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.555542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.555617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.555644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.555657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.555725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.555760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.555773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.555818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.555838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555851 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.555864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.555876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.555951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.555964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.555976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.555989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.556028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.556070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.556083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.556153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.556174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556188 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.556201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.556214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.556293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.556338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.556370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.556421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.556537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.556563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.556655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.556676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.556697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.556710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.556729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.556790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.556862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.556875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.556909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.556930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.556963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.556987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.557000 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.557013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.557038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.557067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.557081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557164 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.557177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.557200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.557221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.557307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.557329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.557343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.557389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.557401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.557419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.557442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.557512 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.557533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.557590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.557624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.557682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.557696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:42:54.557708 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.557721 | orchestrator | 2025-02-10 09:42:54.557734 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-02-10 09:42:54.557746 | orchestrator | Monday 10 February 2025 09:37:55 +0000 (0:00:06.080) 0:01:37.234 ******* 2025-02-10 09:42:54.557759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.557785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.557837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.557888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.557902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.557927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.557974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.557988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.558014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.558217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.558334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.558362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558486 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.558499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.558661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.558828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.558939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.558953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.558966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.558979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.559004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 
09:42:54.559027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.559058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.559171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.559234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.559256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.559299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.559365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.559410 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.559427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.559453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.559483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.559532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.559547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.559580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.559605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.559693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.559713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.559748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.559762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.559779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.559844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.559882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.559916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.559939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.559983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.560008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.560029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 
5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.560192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.560208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.560221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.560255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.560292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.560307 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560320 | orchestrator | 2025-02-10 09:42:54.560333 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-02-10 09:42:54.560346 | orchestrator | Monday 10 February 2025 09:38:02 +0000 (0:00:06.520) 0:01:43.754 ******* 2025-02-10 09:42:54.560359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.560378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  
2025-02-10 09:42:54.560420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.560453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.560490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.560503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.560545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.560613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.560663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.560682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.560717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:42:54.560777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.560791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.560825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.560838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.560896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.560954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.560995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.561128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.561172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.561207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.561291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-02-10 09:42:54.561313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.561426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.561511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.561574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.561593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.561649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.561699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.561777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.561832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.561845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.561892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.561929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.561942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.561955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.561982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.562010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.562068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.562094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.562179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.562238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.562251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.562292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.562320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.562335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.562361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.562390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.562404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.562458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.562480 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.562501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562533 | orchestrator | 2025-02-10 09:42:54.562554 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-02-10 09:42:54.562572 | orchestrator | Monday 10 February 2025 09:38:14 +0000 (0:00:12.857) 0:01:56.612 ******* 2025-02-10 09:42:54.562583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.562610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.562669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.562703 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.562715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.562747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.562774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.562797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.562827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.562843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562854 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.562876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.562933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.562954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.562977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.562996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.563014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.563025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.563088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.563154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 
'timeout': '30'}}})  2025-02-10 09:42:54.563212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.563277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:42:54.563287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.563311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.563331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.563369 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.563422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.563441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563452 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.563462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.563473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:42:54.563530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.563541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.563638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.563672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.563752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.563780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.563816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.563919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.563951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.563982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.564000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.564021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.564073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.564090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564125 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.564136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.564147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.564218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:42:54.564262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.564302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.564325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.564371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.564394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564406 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.564416 | orchestrator | 2025-02-10 09:42:54.564427 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-02-10 09:42:54.564438 | orchestrator | Monday 10 February 2025 09:38:21 +0000 (0:00:06.333) 0:02:02.946 ******* 2025-02-10 09:42:54.564448 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:42:54.564458 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.564469 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:54.564479 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.564489 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.564499 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:42:54.564509 | orchestrator | 2025-02-10 09:42:54.564519 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-02-10 09:42:54.564529 | orchestrator | Monday 10 February 2025 09:38:28 +0000 (0:00:07.322) 0:02:10.268 ******* 2025-02-10 09:42:54.564540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.564551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.564620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.564704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.564726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.564771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.564795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564807 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.564818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.564829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564852 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.564899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564921 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.564937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.564980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.564992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.565003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.565051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.565076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:42:54.565087 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.565143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.565157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.565228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.565287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.565317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.565356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.565377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565397 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.565407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.565416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.565477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.565537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.565568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.565582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.565654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.565679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.565688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.565735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.565798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.565862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.565890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.565960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.565986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.566007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.566064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.566083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.566092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.566124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.566134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.566161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.566179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.566188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.566197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.566206 | orchestrator | 2025-02-10 09:42:54.566215 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-02-10 09:42:54.566223 | orchestrator | Monday 10 February 2025 09:38:36 +0000 (0:00:07.596) 0:02:17.865 ******* 2025-02-10 09:42:54.566232 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.566240 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.566249 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.566257 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.566266 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.566274 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.566283 | orchestrator | 2025-02-10 09:42:54.566291 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-02-10 09:42:54.566300 | orchestrator | Monday 10 February 2025 09:38:40 +0000 (0:00:03.857) 0:02:21.722 ******* 2025-02-10 09:42:54.566309 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.566326 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.566335 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.566344 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.566352 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.566360 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.566368 | orchestrator | 2025-02-10 09:42:54.566377 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-02-10 09:42:54.566386 | orchestrator | Monday 10 February 2025 09:38:44 +0000 (0:00:04.195) 0:02:25.917 ******* 2025-02-10 09:42:54.566394 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.566402 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.566411 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.566419 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.566427 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.566436 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.566444 | orchestrator | 2025-02-10 09:42:54.566464 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-02-10 09:42:54.566475 | orchestrator | Monday 10 February 2025 09:38:50 +0000 (0:00:06.239) 0:02:32.156 ******* 2025-02-10 09:42:54.566484 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.566493 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.566501 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.566510 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.566518 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.566527 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.566535 | orchestrator | 2025-02-10 09:42:54.566543 | orchestrator | TASK [neutron : Copying over eswitchd.conf] 
************************************ 2025-02-10 09:42:54.566552 | orchestrator | Monday 10 February 2025 09:38:58 +0000 (0:00:07.582) 0:02:39.739 ******* 2025-02-10 09:42:54.566560 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.566569 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.566577 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.566585 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.566594 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.566603 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.566611 | orchestrator | 2025-02-10 09:42:54.566620 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-02-10 09:42:54.566628 | orchestrator | Monday 10 February 2025 09:39:04 +0000 (0:00:06.248) 0:02:45.987 ******* 2025-02-10 09:42:54.566637 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.566645 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.566653 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.566662 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.566670 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.566678 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.566687 | orchestrator | 2025-02-10 09:42:54.566699 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-02-10 09:42:54.566708 | orchestrator | Monday 10 February 2025 09:39:12 +0000 (0:00:08.149) 0:02:54.137 ******* 2025-02-10 09:42:54.566716 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:42:54.566725 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.566735 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:42:54.566743 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.566752 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:42:54.566760 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.566769 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:42:54.566778 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.566787 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:42:54.566801 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.566810 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:42:54.566818 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.566827 | orchestrator | 2025-02-10 09:42:54.566835 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-02-10 09:42:54.566851 | orchestrator | Monday 10 February 2025 09:39:16 +0000 (0:00:03.654) 0:02:57.792 ******* 2025-02-10 09:42:54.566860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.566869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.566893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.566912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.566922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.566978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.566995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.567090 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.567145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.567219 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.567234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567258 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.567274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.567289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.567378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.567472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.567512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.567560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.567574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567583 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.567592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.567601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.567665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.567728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.567753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.567798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.567813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567822 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.567831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.567840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567849 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.567900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 
09:42:54.567919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.567968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.567978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.567987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.567997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.568032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.568047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:42:54.568056 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.568065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.568073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.568294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.568369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.568389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.568424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.568449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568460 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.568481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.568490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.568548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.568613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.568638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.568673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.568692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568701 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.568709 | orchestrator | 2025-02-10 09:42:54.568718 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-02-10 09:42:54.568726 | orchestrator | Monday 10 February 2025 09:39:21 +0000 (0:00:05.641) 0:03:03.433 ******* 2025-02-10 09:42:54.568734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.568743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.568794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.568818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.568882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.568955 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.568972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.568987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.568996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.569032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569076 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.569095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.569118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.569156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569173 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.569192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.569201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569225 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.569252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.569343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 
'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.569375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569397 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.569416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.569426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-02-10 09:42:54.569472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.569556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.569590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.569614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569657 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.569666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.569683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.569771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.569814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569832 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.569846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.569866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.569906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.569953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.569970 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.569978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.569993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.570002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.570040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}}) 
 2025-02-10 09:42:54.570052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.570061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.570069 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.570077 | orchestrator | 2025-02-10 09:42:54.570085 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-02-10 09:42:54.570093 | orchestrator | Monday 10 February 2025 09:39:26 +0000 (0:00:04.693) 0:03:08.127 ******* 2025-02-10 09:42:54.570122 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.570131 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.570139 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.570147 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.570156 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.570165 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.570173 | orchestrator | 2025-02-10 09:42:54.570182 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-02-10 09:42:54.570190 | orchestrator | Monday 10 February 2025 09:39:32 +0000 (0:00:05.672) 0:03:13.799 ******* 2025-02-10 09:42:54.570198 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.570206 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.570215 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.570223 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:42:54.570230 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:42:54.570239 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:42:54.570247 | orchestrator | 2025-02-10 09:42:54.570260 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-02-10 09:42:54.570269 | orchestrator | Monday 10 February 2025 09:39:43 +0000 (0:00:11.151) 0:03:24.951 ******* 2025-02-10 09:42:54.570277 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.570285 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.570293 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.570300 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.570309 | orchestrator | skipping: [testbed-node-4] 2025-02-10 
09:42:54.570317 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570325 | orchestrator |
2025-02-10 09:42:54.570334 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-02-10 09:42:54.570342 | orchestrator | Monday 10 February 2025 09:39:46 +0000 (0:00:03.038) 0:03:27.989 *******
2025-02-10 09:42:54.570350 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:42:54.570357 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:42:54.570365 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:42:54.570373 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:42:54.570381 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:42:54.570389 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570397 | orchestrator |
2025-02-10 09:42:54.570406 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-02-10 09:42:54.570413 | orchestrator | Monday 10 February 2025 09:39:49 +0000 (0:00:03.550) 0:03:31.540 *******
2025-02-10 09:42:54.570422 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:42:54.570430 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:42:54.570437 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:42:54.570457 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570465 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:42:54.570473 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:42:54.570481 | orchestrator |
2025-02-10 09:42:54.570490 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-02-10 09:42:54.570497 | orchestrator | Monday 10 February 2025 09:40:00 +0000 (0:00:10.408) 0:03:41.948 *******
2025-02-10 09:42:54.570505 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:42:54.570513 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:42:54.570521 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:42:54.570529 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570536 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:42:54.570544 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:42:54.570552 | orchestrator |
2025-02-10 09:42:54.570560 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-02-10 09:42:54.570568 | orchestrator | Monday 10 February 2025 09:40:03 +0000 (0:00:03.372) 0:03:45.321 *******
2025-02-10 09:42:54.570576 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:42:54.570584 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:42:54.570592 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:42:54.570600 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:42:54.570608 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:42:54.570616 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570624 | orchestrator |
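Each of the service definitions dumped in the surrounding loops also carries a 'healthcheck' block (interval, retries, start_period, test, timeout). Those keys line up with the standard Docker healthcheck options, so the ironic-neutron-agent definition above would roughly correspond to the following compose-style healthcheck (an illustrative rendering only; the values are copied from the log and the seconds units are an assumption):

  healthcheck:
    test: ["CMD-SHELL", "healthcheck_port ironic-neutron-agent 5672"]   # 'test' field from the definition
    interval: 30s                                                       # 'interval': '30'
    timeout: 30s                                                        # 'timeout': '30'
    retries: 3                                                          # 'retries': '3'
    start_period: 5s                                                    # 'start_period': '5'

The container is reported healthy as long as the configured test command keeps exiting successfully within the timeout.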
2025-02-10 09:42:54.570632 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-02-10 09:42:54.570640 | orchestrator | Monday 10 February 2025 09:40:07 +0000 (0:00:03.698) 0:03:49.019 *******
2025-02-10 09:42:54.570648 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:42:54.570656 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:42:54.570664 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:42:54.570672 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:42:54.570680 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570687 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:42:54.570695 | orchestrator |
2025-02-10 09:42:54.570703 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-02-10 09:42:54.570711 | orchestrator | Monday 10 February 2025 09:40:12 +0000 (0:00:05.552) 0:03:54.572 *******
2025-02-10 09:42:54.570728 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:42:54.570736 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:42:54.570744 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:42:54.570751 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:42:54.570759 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:42:54.570767 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570775 | orchestrator |
2025-02-10 09:42:54.570783 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-02-10 09:42:54.570790 | orchestrator | Monday 10 February 2025 09:40:15 +0000 (0:00:02.962) 0:03:57.534 *******
2025-02-10 09:42:54.570798 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:42:54.570811 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:42:54.570819 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:42:54.570827 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:42:54.570835 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:42:54.570843 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570850 | orchestrator |
2025-02-10 09:42:54.570858 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-02-10 09:42:54.570866 | orchestrator | Monday 10 February 2025 09:40:18 +0000 (0:00:02.909) 0:04:00.444 *******
2025-02-10 09:42:54.570875 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-10 09:42:54.570882 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:42:54.570890 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-10 09:42:54.570898 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:42:54.570906 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-10 09:42:54.570914 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:42:54.570922 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-10 09:42:54.570930 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:42:54.570938 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-10 09:42:54.570946 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:42:54.570953 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-10 09:42:54.570961 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.570969 | orchestrator |
2025-02-10 09:42:54.570977 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-02-10 09:42:54.570985 | orchestrator | Monday 10 February 2025 09:40:21 +0000 (0:00:02.468) 0:04:02.913 *******
2025-02-10 09:42:54.571013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.571023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.571062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.571141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.571164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.571206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.571220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571229 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.571237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.571257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571271 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.571302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 
09:42:54.571339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.571365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.571382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.571437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.571463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571472 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.571480 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.571500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.571540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.571607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.571624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.571672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.571681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571689 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.571704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.571717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.571762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.571826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.571843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.571862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.571894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.571902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571910 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.571924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.571938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.571982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.571998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:42:54.572041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.572050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.572077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.572173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.572182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572191 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.572199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.572213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.572258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.572329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.572351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.572393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.572403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-02-10 09:42:54.572411 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:42:54.572419 | orchestrator |
2025-02-10 09:42:54.572428 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-02-10 09:42:54.572436 | orchestrator | Monday 10 February 2025 09:40:23 +0000 (0:00:02.376) 0:04:05.289 *******
2025-02-10 09:42:54.572450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-02-10 09:42:54.572463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-02-10 09:42:54.572471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-02-10 09:42:54.572490 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.572513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.572574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.572631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.572685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2025-02-10 09:42:54.572719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.572741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.572816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.572852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.572874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.572926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.572935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.572958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.572972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.572985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.572994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.573009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:42:54.573026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.573043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.573070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.573126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.573143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.573157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.573178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.573202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:42:54.573214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.573223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.573279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.573287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:42:54.573300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.573330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.573339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.573347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.573378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.573387 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.573402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.573431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.573444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.573458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.573467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.573475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:42:54.573483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:42:54.573528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:42:54.573537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.573568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.573577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:42:54.573598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:42:54.573606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:42:54.573615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:42:54.573628 | orchestrator | 2025-02-10 09:42:54.573637 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-10 09:42:54.573645 | orchestrator | Monday 10 February 2025 09:40:29 +0000 (0:00:06.074) 0:04:11.364 ******* 2025-02-10 09:42:54.573652 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:42:54.573660 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:42:54.573668 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:42:54.573677 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:42:54.573685 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:42:54.573693 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:42:54.573701 | orchestrator | 2025-02-10 09:42:54.573709 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-02-10 09:42:54.573717 | orchestrator | Monday 10 February 2025 09:40:31 +0000 (0:00:01.942) 0:04:13.307 ******* 2025-02-10 09:42:54.573725 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:54.573733 | orchestrator | 2025-02-10 09:42:54.573744 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-02-10 09:42:54.573753 | orchestrator | Monday 10 February 2025 09:40:34 +0000 (0:00:02.530) 0:04:15.837 ******* 2025-02-10 09:42:54.573761 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:54.573769 | orchestrator | 2025-02-10 09:42:54.573778 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-02-10 09:42:54.573786 | orchestrator | Monday 10 February 2025 09:40:36 +0000 (0:00:01.868) 0:04:17.705 ******* 2025-02-10 09:42:54.573795 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:54.573804 | orchestrator | 2025-02-10 09:42:54.573812 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:42:54.573824 | orchestrator | Monday 10 February 2025 09:41:10 +0000 (0:00:34.134) 0:04:51.840 ******* 2025-02-10 09:42:54.573832 | orchestrator | 2025-02-10 09:42:54.573841 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:42:54.573849 | orchestrator | Monday 10 
February 2025 09:41:10 +0000 (0:00:00.154) 0:04:51.995 ******* 2025-02-10 09:42:54.573857 | orchestrator | 2025-02-10 09:42:54.573865 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:42:54.573872 | orchestrator | Monday 10 February 2025 09:41:10 +0000 (0:00:00.476) 0:04:52.472 ******* 2025-02-10 09:42:54.573881 | orchestrator | 2025-02-10 09:42:54.573888 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:42:54.573896 | orchestrator | Monday 10 February 2025 09:41:11 +0000 (0:00:00.209) 0:04:52.681 ******* 2025-02-10 09:42:54.573904 | orchestrator | 2025-02-10 09:42:54.573913 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:42:54.573921 | orchestrator | Monday 10 February 2025 09:41:11 +0000 (0:00:00.216) 0:04:52.898 ******* 2025-02-10 09:42:54.573929 | orchestrator | 2025-02-10 09:42:54.573938 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:42:54.573946 | orchestrator | Monday 10 February 2025 09:41:11 +0000 (0:00:00.237) 0:04:53.136 ******* 2025-02-10 09:42:54.573954 | orchestrator | 2025-02-10 09:42:54.573962 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-02-10 09:42:54.573970 | orchestrator | Monday 10 February 2025 09:41:12 +0000 (0:00:00.753) 0:04:53.890 ******* 2025-02-10 09:42:54.573978 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:54.573986 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:42:54.573994 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:42:54.574002 | orchestrator | 2025-02-10 09:42:54.574010 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-02-10 09:42:54.574056 | orchestrator | Monday 10 February 2025 09:41:48 +0000 (0:00:36.120) 0:05:30.011 ******* 2025-02-10 09:42:54.574065 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:42:54.574073 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:42:54.574081 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:42:54.574089 | orchestrator | 2025-02-10 09:42:54.574108 | orchestrator | RUNNING HANDLER [neutron : Restart ironic-neutron-agent container] ************* 2025-02-10 09:42:54.574117 | orchestrator | Monday 10 February 2025 09:42:33 +0000 (0:00:45.074) 0:06:15.085 ******* 2025-02-10 09:42:54.574125 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:42:54.574133 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:42:54.574141 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:42:54.574149 | orchestrator | 2025-02-10 09:42:54.574157 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:42:54.574166 | orchestrator | testbed-node-0 : ok=29  changed=18  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-02-10 09:42:54.574176 | orchestrator | testbed-node-1 : ok=19  changed=11  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-02-10 09:42:54.574184 | orchestrator | testbed-node-2 : ok=19  changed=11  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-02-10 09:42:54.574192 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-02-10 09:42:54.574201 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-02-10 
09:42:54.574209 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-02-10 09:42:54.574217 | orchestrator | 2025-02-10 09:42:54.574225 | orchestrator | 2025-02-10 09:42:54.574234 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:42:54.574242 | orchestrator | Monday 10 February 2025 09:42:53 +0000 (0:00:20.099) 0:06:35.184 ******* 2025-02-10 09:42:54.574250 | orchestrator | =============================================================================== 2025-02-10 09:42:54.574258 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 45.07s 2025-02-10 09:42:54.574266 | orchestrator | neutron : Restart neutron-server container ----------------------------- 36.12s 2025-02-10 09:42:54.574276 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 34.14s 2025-02-10 09:42:54.574290 | orchestrator | neutron : Restart ironic-neutron-agent container ----------------------- 20.10s 2025-02-10 09:42:54.574299 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 12.86s 2025-02-10 09:42:54.574306 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------ 11.15s 2025-02-10 09:42:54.574320 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------ 10.41s 2025-02-10 09:42:54.574328 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.48s 2025-02-10 09:42:54.574336 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 8.15s 2025-02-10 09:42:54.574344 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 8.05s 2025-02-10 09:42:54.574351 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 7.60s 2025-02-10 09:42:54.574359 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 7.58s 2025-02-10 09:42:54.574367 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 7.39s 2025-02-10 09:42:54.574375 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 7.32s 2025-02-10 09:42:54.574388 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.52s 2025-02-10 09:42:57.587761 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 6.33s 2025-02-10 09:42:57.587921 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 6.29s 2025-02-10 09:42:57.587942 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 6.25s 2025-02-10 09:42:57.587957 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 6.24s 2025-02-10 09:42:57.587972 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.14s 2025-02-10 09:42:57.587986 | orchestrator | 2025-02-10 09:42:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:42:57.588024 | orchestrator | 2025-02-10 09:42:57 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:43:00.630400 | orchestrator | 2025-02-10 09:42:57 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:43:00.630563 | orchestrator | 2025-02-10 09:42:57 | INFO  | Task 
8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:43:00.630613 | orchestrator | 2025-02-10 09:42:57 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED [... repetitive status polling trimmed: tasks ea200019-1718-4db7-a175-86657831b4b9, 9504a7c1-7c2a-4ad2-8253-abc3e3431367, 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac and 67a97af0-1ee5-4eea-99fc-0e5c31203fba are re-checked every ~3 seconds and remain in state STARTED from 09:43:00 through 09:44:01 ...] 2025-02-10 09:44:01.632719 | orchestrator | 2025-02-10 09:44:01 | INFO  | Task 2f0f49e4-56e7-49b8-998f-1625f4357d45 is in state STARTED 2025-02-10 09:44:04.670912 | orchestrator | 2025-02-10 09:44:01 | INFO  | Wait 1
second(s) until the next check 2025-02-10 09:44:04.671146 | orchestrator | 2025-02-10 09:44:04 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:04.672107 | orchestrator | 2025-02-10 09:44:04 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:04.672205 | orchestrator | 2025-02-10 09:44:04 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:04.673408 | orchestrator | 2025-02-10 09:44:04 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:04.676268 | orchestrator | 2025-02-10 09:44:04 | INFO  | Task 2f0f49e4-56e7-49b8-998f-1625f4357d45 is in state STARTED 2025-02-10 09:44:07.726977 | orchestrator | 2025-02-10 09:44:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:07.727208 | orchestrator | 2025-02-10 09:44:07 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:07.728202 | orchestrator | 2025-02-10 09:44:07 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:07.728294 | orchestrator | 2025-02-10 09:44:07 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:07.730408 | orchestrator | 2025-02-10 09:44:07 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:07.733157 | orchestrator | 2025-02-10 09:44:07 | INFO  | Task 2f0f49e4-56e7-49b8-998f-1625f4357d45 is in state STARTED 2025-02-10 09:44:10.777520 | orchestrator | 2025-02-10 09:44:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:10.777776 | orchestrator | 2025-02-10 09:44:10 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:10.777873 | orchestrator | 2025-02-10 09:44:10 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:10.777895 | orchestrator | 2025-02-10 09:44:10 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:10.778191 | orchestrator | 2025-02-10 09:44:10 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:10.778724 | orchestrator | 2025-02-10 09:44:10 | INFO  | Task 2f0f49e4-56e7-49b8-998f-1625f4357d45 is in state STARTED 2025-02-10 09:44:10.780446 | orchestrator | 2025-02-10 09:44:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:13.822428 | orchestrator | 2025-02-10 09:44:13 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:13.822763 | orchestrator | 2025-02-10 09:44:13 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:13.822804 | orchestrator | 2025-02-10 09:44:13 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:13.822834 | orchestrator | 2025-02-10 09:44:13 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:13.823372 | orchestrator | 2025-02-10 09:44:13 | INFO  | Task 2f0f49e4-56e7-49b8-998f-1625f4357d45 is in state STARTED 2025-02-10 09:44:16.906363 | orchestrator | 2025-02-10 09:44:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:16.906607 | orchestrator | 2025-02-10 09:44:16 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:16.907452 | orchestrator | 2025-02-10 09:44:16 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:16.907546 | orchestrator | 2025-02-10 09:44:16 | INFO  | Task 
8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:19.985709 | orchestrator | 2025-02-10 09:44:16 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:19.985894 | orchestrator | 2025-02-10 09:44:16 | INFO  | Task 2f0f49e4-56e7-49b8-998f-1625f4357d45 is in state SUCCESS 2025-02-10 09:44:19.985916 | orchestrator | 2025-02-10 09:44:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:19.985953 | orchestrator | 2025-02-10 09:44:19 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:19.986101 | orchestrator | 2025-02-10 09:44:19 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:19.986165 | orchestrator | 2025-02-10 09:44:19 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:19.987432 | orchestrator | 2025-02-10 09:44:19 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:23.039543 | orchestrator | 2025-02-10 09:44:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:23.039751 | orchestrator | 2025-02-10 09:44:23 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:23.040366 | orchestrator | 2025-02-10 09:44:23 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:23.040420 | orchestrator | 2025-02-10 09:44:23 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:23.041506 | orchestrator | 2025-02-10 09:44:23 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:26.081895 | orchestrator | 2025-02-10 09:44:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:26.082112 | orchestrator | 2025-02-10 09:44:26 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:26.083662 | orchestrator | 2025-02-10 09:44:26 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:26.083703 | orchestrator | 2025-02-10 09:44:26 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:26.084566 | orchestrator | 2025-02-10 09:44:26 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:29.131326 | orchestrator | 2025-02-10 09:44:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:29.131479 | orchestrator | 2025-02-10 09:44:29 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:29.131660 | orchestrator | 2025-02-10 09:44:29 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:29.131685 | orchestrator | 2025-02-10 09:44:29 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:29.131707 | orchestrator | 2025-02-10 09:44:29 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:32.178392 | orchestrator | 2025-02-10 09:44:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:32.178574 | orchestrator | 2025-02-10 09:44:32 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:32.178821 | orchestrator | 2025-02-10 09:44:32 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:32.178850 | orchestrator | 2025-02-10 09:44:32 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:32.178870 | orchestrator | 2025-02-10 09:44:32 | INFO  | Task 
67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:35.234699 | orchestrator | 2025-02-10 09:44:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:35.234862 | orchestrator | 2025-02-10 09:44:35 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:35.237346 | orchestrator | 2025-02-10 09:44:35 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:35.238211 | orchestrator | 2025-02-10 09:44:35 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:35.238393 | orchestrator | 2025-02-10 09:44:35 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:38.282643 | orchestrator | 2025-02-10 09:44:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:38.282831 | orchestrator | 2025-02-10 09:44:38 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:41.494441 | orchestrator | 2025-02-10 09:44:38 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:41.494588 | orchestrator | 2025-02-10 09:44:38 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:41.494609 | orchestrator | 2025-02-10 09:44:38 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:41.494627 | orchestrator | 2025-02-10 09:44:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:41.494663 | orchestrator | 2025-02-10 09:44:41 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:44.426684 | orchestrator | 2025-02-10 09:44:41 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:44.426826 | orchestrator | 2025-02-10 09:44:41 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:44.426847 | orchestrator | 2025-02-10 09:44:41 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:44.426863 | orchestrator | 2025-02-10 09:44:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:44.426898 | orchestrator | 2025-02-10 09:44:44 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:44.428296 | orchestrator | 2025-02-10 09:44:44 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:44.429601 | orchestrator | 2025-02-10 09:44:44 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:44.430679 | orchestrator | 2025-02-10 09:44:44 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:47.470928 | orchestrator | 2025-02-10 09:44:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:47.471092 | orchestrator | 2025-02-10 09:44:47 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:47.471281 | orchestrator | 2025-02-10 09:44:47 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:47.471310 | orchestrator | 2025-02-10 09:44:47 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:47.472046 | orchestrator | 2025-02-10 09:44:47 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:47.474368 | orchestrator | 2025-02-10 09:44:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:50.529338 | orchestrator | 2025-02-10 09:44:50 | INFO  | Task 
ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:50.538835 | orchestrator | 2025-02-10 09:44:50 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:50.539342 | orchestrator | 2025-02-10 09:44:50 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:50.541049 | orchestrator | 2025-02-10 09:44:50 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:53.571230 | orchestrator | 2025-02-10 09:44:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:53.571356 | orchestrator | 2025-02-10 09:44:53 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:53.571887 | orchestrator | 2025-02-10 09:44:53 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:53.571915 | orchestrator | 2025-02-10 09:44:53 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:53.573383 | orchestrator | 2025-02-10 09:44:53 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:53.573502 | orchestrator | 2025-02-10 09:44:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:56.607487 | orchestrator | 2025-02-10 09:44:56 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:56.607757 | orchestrator | 2025-02-10 09:44:56 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:56.608657 | orchestrator | 2025-02-10 09:44:56 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:56.609660 | orchestrator | 2025-02-10 09:44:56 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:59.645208 | orchestrator | 2025-02-10 09:44:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:59.645322 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:44:59.645625 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:44:59.645642 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:44:59.646382 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:44:59.646416 | orchestrator | 2025-02-10 09:44:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:02.680711 | orchestrator | 2025-02-10 09:45:02 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:02.681009 | orchestrator | 2025-02-10 09:45:02 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:02.682561 | orchestrator | 2025-02-10 09:45:02 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:02.683859 | orchestrator | 2025-02-10 09:45:02 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:05.734340 | orchestrator | 2025-02-10 09:45:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:05.734459 | orchestrator | 2025-02-10 09:45:05 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:05.735559 | orchestrator | 2025-02-10 09:45:05 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:05.737700 | orchestrator | 2025-02-10 09:45:05 | INFO  | Task 
8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:05.740350 | orchestrator | 2025-02-10 09:45:05 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:08.788791 | orchestrator | 2025-02-10 09:45:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:08.788938 | orchestrator | 2025-02-10 09:45:08 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:08.789860 | orchestrator | 2025-02-10 09:45:08 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:08.793753 | orchestrator | 2025-02-10 09:45:08 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:08.795065 | orchestrator | 2025-02-10 09:45:08 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:08.795276 | orchestrator | 2025-02-10 09:45:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:11.839494 | orchestrator | 2025-02-10 09:45:11 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:11.840366 | orchestrator | 2025-02-10 09:45:11 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:11.840511 | orchestrator | 2025-02-10 09:45:11 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:11.840948 | orchestrator | 2025-02-10 09:45:11 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:11.841044 | orchestrator | 2025-02-10 09:45:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:14.878979 | orchestrator | 2025-02-10 09:45:14 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:14.879217 | orchestrator | 2025-02-10 09:45:14 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:14.881001 | orchestrator | 2025-02-10 09:45:14 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:14.881688 | orchestrator | 2025-02-10 09:45:14 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:17.936015 | orchestrator | 2025-02-10 09:45:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:17.936179 | orchestrator | 2025-02-10 09:45:17 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:17.936542 | orchestrator | 2025-02-10 09:45:17 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:17.936560 | orchestrator | 2025-02-10 09:45:17 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:17.937366 | orchestrator | 2025-02-10 09:45:17 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:20.973395 | orchestrator | 2025-02-10 09:45:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:20.973600 | orchestrator | 2025-02-10 09:45:20 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:20.973691 | orchestrator | 2025-02-10 09:45:20 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:20.973711 | orchestrator | 2025-02-10 09:45:20 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:20.973730 | orchestrator | 2025-02-10 09:45:20 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:24.028973 | orchestrator | 2025-02-10 09:45:20 | INFO  | Wait 1 
second(s) until the next check 2025-02-10 09:45:24.029181 | orchestrator | 2025-02-10 09:45:24 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:24.034079 | orchestrator | 2025-02-10 09:45:24 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:24.034178 | orchestrator | 2025-02-10 09:45:24 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:24.034196 | orchestrator | 2025-02-10 09:45:24 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:24.034224 | orchestrator | 2025-02-10 09:45:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:27.085520 | orchestrator | 2025-02-10 09:45:27 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:27.085724 | orchestrator | 2025-02-10 09:45:27 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:27.087788 | orchestrator | 2025-02-10 09:45:27 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:27.089502 | orchestrator | 2025-02-10 09:45:27 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:27.089544 | orchestrator | 2025-02-10 09:45:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:30.138430 | orchestrator | 2025-02-10 09:45:30 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:30.139114 | orchestrator | 2025-02-10 09:45:30 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:30.139218 | orchestrator | 2025-02-10 09:45:30 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:30.139909 | orchestrator | 2025-02-10 09:45:30 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:33.183223 | orchestrator | 2025-02-10 09:45:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:33.183385 | orchestrator | 2025-02-10 09:45:33 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:33.183481 | orchestrator | 2025-02-10 09:45:33 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:33.185319 | orchestrator | 2025-02-10 09:45:33 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:33.187524 | orchestrator | 2025-02-10 09:45:33 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:36.257330 | orchestrator | 2025-02-10 09:45:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:36.257522 | orchestrator | 2025-02-10 09:45:36 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:36.257993 | orchestrator | 2025-02-10 09:45:36 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:36.259507 | orchestrator | 2025-02-10 09:45:36 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:36.260976 | orchestrator | 2025-02-10 09:45:36 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:36.262568 | orchestrator | 2025-02-10 09:45:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:39.308644 | orchestrator | 2025-02-10 09:45:39 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:39.309802 | orchestrator | 2025-02-10 09:45:39 | INFO  | Task 
9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:39.309851 | orchestrator | 2025-02-10 09:45:39 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:42.350643 | orchestrator | 2025-02-10 09:45:39 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:42.350830 | orchestrator | 2025-02-10 09:45:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:42.350872 | orchestrator | 2025-02-10 09:45:42 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:42.350960 | orchestrator | 2025-02-10 09:45:42 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:42.354905 | orchestrator | 2025-02-10 09:45:42 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:45.398901 | orchestrator | 2025-02-10 09:45:42 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:45.399055 | orchestrator | 2025-02-10 09:45:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:45.399098 | orchestrator | 2025-02-10 09:45:45 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:45.399200 | orchestrator | 2025-02-10 09:45:45 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:45.399312 | orchestrator | 2025-02-10 09:45:45 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:45.400724 | orchestrator | 2025-02-10 09:45:45 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:48.451771 | orchestrator | 2025-02-10 09:45:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:48.452323 | orchestrator | 2025-02-10 09:45:48 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:48.453812 | orchestrator | 2025-02-10 09:45:48 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:48.454677 | orchestrator | 2025-02-10 09:45:48 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:48.456548 | orchestrator | 2025-02-10 09:45:48 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:51.519907 | orchestrator | 2025-02-10 09:45:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:51.520087 | orchestrator | 2025-02-10 09:45:51 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:54.604039 | orchestrator | 2025-02-10 09:45:51 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:54.604217 | orchestrator | 2025-02-10 09:45:51 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:54.604250 | orchestrator | 2025-02-10 09:45:51 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:54.604274 | orchestrator | 2025-02-10 09:45:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:54.604356 | orchestrator | 2025-02-10 09:45:54 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:54.605817 | orchestrator | 2025-02-10 09:45:54 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:54.622312 | orchestrator | 2025-02-10 09:45:54 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:54.626277 | orchestrator | 2025-02-10 09:45:54 | INFO  | Task 
67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:45:57.672607 | orchestrator | 2025-02-10 09:45:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:57.672773 | orchestrator | 2025-02-10 09:45:57 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:45:57.673348 | orchestrator | 2025-02-10 09:45:57 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:45:57.673386 | orchestrator | 2025-02-10 09:45:57 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:45:57.675713 | orchestrator | 2025-02-10 09:45:57 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:00.723791 | orchestrator | 2025-02-10 09:45:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:00.723929 | orchestrator | 2025-02-10 09:46:00 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:00.723986 | orchestrator | 2025-02-10 09:46:00 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:00.725351 | orchestrator | 2025-02-10 09:46:00 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:00.726872 | orchestrator | 2025-02-10 09:46:00 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:00.727320 | orchestrator | 2025-02-10 09:46:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:03.765268 | orchestrator | 2025-02-10 09:46:03 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:03.765481 | orchestrator | 2025-02-10 09:46:03 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:03.765919 | orchestrator | 2025-02-10 09:46:03 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:03.766693 | orchestrator | 2025-02-10 09:46:03 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:06.808985 | orchestrator | 2025-02-10 09:46:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:06.809187 | orchestrator | 2025-02-10 09:46:06 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:06.809553 | orchestrator | 2025-02-10 09:46:06 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:06.810526 | orchestrator | 2025-02-10 09:46:06 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:06.811895 | orchestrator | 2025-02-10 09:46:06 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:09.852515 | orchestrator | 2025-02-10 09:46:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:09.852741 | orchestrator | 2025-02-10 09:46:09 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:09.852830 | orchestrator | 2025-02-10 09:46:09 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:09.852853 | orchestrator | 2025-02-10 09:46:09 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:09.856440 | orchestrator | 2025-02-10 09:46:09 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:12.889793 | orchestrator | 2025-02-10 09:46:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:12.889955 | orchestrator | 2025-02-10 09:46:12 | INFO  | Task 
ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:12.891341 | orchestrator | 2025-02-10 09:46:12 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:12.893377 | orchestrator | 2025-02-10 09:46:12 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:12.895423 | orchestrator | 2025-02-10 09:46:12 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:12.895539 | orchestrator | 2025-02-10 09:46:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:15.940691 | orchestrator | 2025-02-10 09:46:15 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:15.944116 | orchestrator | 2025-02-10 09:46:15 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:15.946593 | orchestrator | 2025-02-10 09:46:15 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:15.948391 | orchestrator | 2025-02-10 09:46:15 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:19.001762 | orchestrator | 2025-02-10 09:46:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:19.002140 | orchestrator | 2025-02-10 09:46:18 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:19.003142 | orchestrator | 2025-02-10 09:46:19 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:19.006477 | orchestrator | 2025-02-10 09:46:19 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:19.010266 | orchestrator | 2025-02-10 09:46:19 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:22.087078 | orchestrator | 2025-02-10 09:46:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:22.087307 | orchestrator | 2025-02-10 09:46:22 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:22.089252 | orchestrator | 2025-02-10 09:46:22 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:22.090620 | orchestrator | 2025-02-10 09:46:22 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:22.092911 | orchestrator | 2025-02-10 09:46:22 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:25.139265 | orchestrator | 2025-02-10 09:46:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:25.139424 | orchestrator | 2025-02-10 09:46:25 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:25.142228 | orchestrator | 2025-02-10 09:46:25 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:28.202963 | orchestrator | 2025-02-10 09:46:25 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:28.203118 | orchestrator | 2025-02-10 09:46:25 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:28.203181 | orchestrator | 2025-02-10 09:46:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:28.203233 | orchestrator | 2025-02-10 09:46:28 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:28.203591 | orchestrator | 2025-02-10 09:46:28 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:28.205500 | orchestrator | 2025-02-10 09:46:28 | INFO  | Task 
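The wait loop reflected in the output above re-checks a fixed set of task IDs roughly once per second and keeps waiting until each task has left the STARTED state (one of them, 2f0f49e4, is reported as SUCCESS further up). The following is a minimal, standalone sketch of that behaviour; get_task_state() is a placeholder stub, not the actual task client used by the deployment.

```python
import time

def get_task_state(task_id):
    """Placeholder for the real task-state lookup; stubbed so the sketch runs."""
    return "SUCCESS"

def wait_for_tasks(task_ids, interval=1):
    """Poll the given task IDs until none of them is still STARTED/PENDING."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks([
    "ea200019-1718-4db7-a175-86657831b4b9",
    "9504a7c1-7c2a-4ad2-8253-abc3e3431367",
])
```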
8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state STARTED 2025-02-10 09:46:28.206461 | orchestrator | 2025-02-10 09:46:28 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:28.207097 | orchestrator | 2025-02-10 09:46:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:31.255625 | orchestrator | 2025-02-10 09:46:31 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:31.257341 | orchestrator | 2025-02-10 09:46:31 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state STARTED 2025-02-10 09:46:31.262647 | orchestrator | 2025-02-10 09:46:31.262754 | orchestrator | None 2025-02-10 09:46:31.262773 | orchestrator | 2025-02-10 09:46:31.262789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:46:31.262875 | orchestrator | 2025-02-10 09:46:31.262890 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:46:31.262904 | orchestrator | Monday 10 February 2025 09:42:47 +0000 (0:00:00.370) 0:00:00.370 ******* 2025-02-10 09:46:31.262919 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:46:31.262934 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:46:31.262948 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:46:31.262962 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:46:31.262976 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:46:31.263086 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:46:31.263100 | orchestrator | 2025-02-10 09:46:31.263143 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:46:31.263207 | orchestrator | Monday 10 February 2025 09:42:47 +0000 (0:00:00.752) 0:00:01.122 ******* 2025-02-10 09:46:31.263225 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-02-10 09:46:31.263241 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-02-10 09:46:31.263257 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-02-10 09:46:31.263274 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-02-10 09:46:31.263289 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-02-10 09:46:31.263305 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-02-10 09:46:31.263320 | orchestrator | 2025-02-10 09:46:31.263336 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-02-10 09:46:31.263351 | orchestrator | 2025-02-10 09:46:31.263368 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-10 09:46:31.263383 | orchestrator | Monday 10 February 2025 09:42:48 +0000 (0:00:01.001) 0:00:02.124 ******* 2025-02-10 09:46:31.263399 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:46:31.263416 | orchestrator | 2025-02-10 09:46:31.263432 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-02-10 09:46:31.263448 | orchestrator | Monday 10 February 2025 09:42:50 +0000 (0:00:01.536) 0:00:03.660 ******* 2025-02-10 09:46:31.263464 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-02-10 09:46:31.263481 | orchestrator | 2025-02-10 09:46:31.263555 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] 
*********************** 2025-02-10 09:46:31.263571 | orchestrator | Monday 10 February 2025 09:42:54 +0000 (0:00:03.808) 0:00:07.469 ******* 2025-02-10 09:46:31.263609 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-02-10 09:46:31.263624 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-02-10 09:46:31.263638 | orchestrator | 2025-02-10 09:46:31.263652 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-02-10 09:46:31.263666 | orchestrator | Monday 10 February 2025 09:43:00 +0000 (0:00:05.961) 0:00:13.431 ******* 2025-02-10 09:46:31.263680 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:46:31.263695 | orchestrator | 2025-02-10 09:46:31.263709 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-02-10 09:46:31.263723 | orchestrator | Monday 10 February 2025 09:43:03 +0000 (0:00:03.193) 0:00:16.624 ******* 2025-02-10 09:46:31.263737 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:46:31.263751 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-02-10 09:46:31.263765 | orchestrator | 2025-02-10 09:46:31.263779 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-02-10 09:46:31.263792 | orchestrator | Monday 10 February 2025 09:43:07 +0000 (0:00:03.728) 0:00:20.352 ******* 2025-02-10 09:46:31.263806 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:46:31.263821 | orchestrator | 2025-02-10 09:46:31.263850 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-02-10 09:46:31.263864 | orchestrator | Monday 10 February 2025 09:43:10 +0000 (0:00:03.588) 0:00:23.941 ******* 2025-02-10 09:46:31.263878 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-02-10 09:46:31.263893 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-02-10 09:46:31.263906 | orchestrator | 2025-02-10 09:46:31.263920 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-02-10 09:46:31.263934 | orchestrator | Monday 10 February 2025 09:43:20 +0000 (0:00:09.214) 0:00:33.155 ******* 2025-02-10 09:46:31.264030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.264060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
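The service-ks-register tasks above create the cinderv3 service, its internal and public endpoints, the service project, the cinder user, and the admin/service role grants. As a rough orientation only, the sketch below expresses the same sequence with openstacksdk; the role itself uses Ansible modules, the endpoint URLs and names are taken from the log, and the cloud profile name is an assumption.

```python
import openstack

# Assumed clouds.yaml profile name; adjust to the actual environment.
conn = openstack.connect(cloud="testbed")

# Service and endpoints (values as reported in the task output above).
service = conn.identity.create_service(name="cinderv3", type="volumev3")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
    ("public", "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# Project, user, and role assignments.
project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
user = conn.identity.create_user(name="cinder", default_project_id=project.id)
for role_name in ("admin", "service"):
    role = conn.identity.find_role(role_name)
    if role:
        conn.identity.assign_project_role_to_user(project, user, role)
```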
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.264092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.264107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.264194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.264211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.264394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.264464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
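The service maps dumped in the "Ensuring config directories exist" task pair each cinder component with a group, image, volumes, healthcheck, and (for the API) an haproxy section, and each host only acts on the entries whose group it belongs to: the control nodes (testbed-node-0..2) handle cinder-api/cinder-scheduler while testbed-node-3..5 handle cinder-volume/cinder-backup, which is why the same loop shows "changed" on some hosts and "skipping" on others. The snippet below is a simplified illustration of that per-host filtering, not the actual kolla-ansible logic, using trimmed-down service entries.

```python
# Trimmed-down versions of the service entries dumped in the log above.
cinder_services = {
    "cinder-api": {"container_name": "cinder_api", "group": "cinder-api", "enabled": True},
    "cinder-scheduler": {"container_name": "cinder_scheduler", "group": "cinder-scheduler", "enabled": True},
    "cinder-volume": {"container_name": "cinder_volume", "group": "cinder-volume", "enabled": True},
    "cinder-backup": {"container_name": "cinder_backup", "group": "cinder-backup", "enabled": True},
}

def services_for_host(host_groups, services):
    """Return the service names whose group matches one of the host's groups."""
    return [
        name
        for name, svc in services.items()
        if svc["enabled"] and svc["group"] in host_groups
    ]

# Control nodes act on api/scheduler; volume nodes act on volume/backup.
print(services_for_host({"cinder-api", "cinder-scheduler"}, cinder_services))
print(services_for_host({"cinder-volume", "cinder-backup"}, cinder_services))
```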
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.264509 | orchestrator | 2025-02-10 09:46:31.264523 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-10 09:46:31.264538 | orchestrator | Monday 10 February 2025 09:43:23 +0000 (0:00:03.882) 0:00:37.038 ******* 2025-02-10 09:46:31.264552 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:31.264566 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:31.264580 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:31.264595 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:46:31.264609 | orchestrator | 2025-02-10 09:46:31.264623 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-02-10 09:46:31.264637 | orchestrator | Monday 10 February 2025 09:43:26 +0000 (0:00:02.100) 0:00:39.138 ******* 2025-02-10 09:46:31.264651 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-02-10 09:46:31.264665 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-02-10 09:46:31.264679 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-02-10 09:46:31.264700 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-02-10 09:46:31.264714 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-02-10 09:46:31.264728 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-02-10 09:46:31.264742 | orchestrator | 2025-02-10 09:46:31.264756 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-02-10 09:46:31.264770 | orchestrator | Monday 10 February 2025 09:43:31 +0000 (0:00:05.482) 0:00:44.620 ******* 2025-02-10 09:46:31.264785 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:46:31.264808 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:46:31.264824 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:46:31.264838 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:46:31.264859 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:46:31.264874 | orchestrator | skipping: [testbed-node-5] => 
(item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:46:31.264897 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:46:31.265604 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:46:31.265674 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:46:31.265703 | 
orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:46:31.265720 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:46:31.265761 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:46:31.265807 | orchestrator | 2025-02-10 09:46:31.265824 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-02-10 09:46:31.265839 | orchestrator | Monday 10 February 2025 09:43:37 +0000 (0:00:05.812) 0:00:50.432 ******* 2025-02-10 09:46:31.265852 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:46:31.265868 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:46:31.265882 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:46:31.265895 | orchestrator | 2025-02-10 09:46:31.265909 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-02-10 09:46:31.265923 | orchestrator | Monday 10 February 2025 09:43:40 +0000 (0:00:03.024) 0:00:53.456 ******* 2025-02-10 09:46:31.265937 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-02-10 09:46:31.265952 | 
orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-02-10 09:46:31.265966 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-02-10 09:46:31.265979 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-02-10 09:46:31.265993 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-02-10 09:46:31.266070 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-02-10 09:46:31.266089 | orchestrator | 2025-02-10 09:46:31.266103 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-02-10 09:46:31.266117 | orchestrator | Monday 10 February 2025 09:43:45 +0000 (0:00:05.580) 0:00:59.037 ******* 2025-02-10 09:46:31.266131 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-02-10 09:46:31.266145 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-02-10 09:46:31.266209 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-02-10 09:46:31.266225 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-02-10 09:46:31.266242 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-02-10 09:46:31.266258 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-02-10 09:46:31.266274 | orchestrator | 2025-02-10 09:46:31.266290 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-02-10 09:46:31.266306 | orchestrator | Monday 10 February 2025 09:43:47 +0000 (0:00:01.802) 0:01:00.840 ******* 2025-02-10 09:46:31.266322 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:31.266338 | orchestrator | 2025-02-10 09:46:31.266357 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-02-10 09:46:31.266373 | orchestrator | Monday 10 February 2025 09:43:47 +0000 (0:00:00.260) 0:01:01.100 ******* 2025-02-10 09:46:31.266389 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:31.266404 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:31.266421 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:31.266436 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:31.266452 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:31.266467 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:31.266491 | orchestrator | 2025-02-10 09:46:31.266507 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-10 09:46:31.266523 | orchestrator | Monday 10 February 2025 09:43:49 +0000 (0:00:01.441) 0:01:02.542 ******* 2025-02-10 09:46:31.266539 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:46:31.266554 | orchestrator | 2025-02-10 09:46:31.266569 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-02-10 09:46:31.266582 | orchestrator | Monday 10 February 2025 09:43:51 +0000 (0:00:02.141) 0:01:04.683 ******* 2025-02-10 09:46:31.266598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.266636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.266661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.266691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266783 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.266838 | orchestrator | 2025-02-10 09:46:31.266853 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-02-10 09:46:31.266867 | orchestrator | Monday 10 February 2025 09:43:56 +0000 (0:00:04.974) 0:01:09.658 ******* 2025-02-10 09:46:31.266882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 
09:46:31.266896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.266911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.266925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.266946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.266968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.266983 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:31.266997 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:31.267011 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:31.267025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267054 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:31.267068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267110 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:31.267127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267232 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:31.267257 | orchestrator | 2025-02-10 09:46:31.267272 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-02-10 09:46:31.267286 | orchestrator | Monday 10 February 2025 09:43:59 +0000 (0:00:03.176) 0:01:12.834 ******* 2025-02-10 09:46:31.267300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.267315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267329 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:31.267366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.267381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267394 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:31.267407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.267420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267433 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:31.267446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267481 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:31.267500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267527 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:31.267540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267573 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:31.267585 | orchestrator | 2025-02-10 09:46:31.267598 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-02-10 09:46:31.267610 | orchestrator | Monday 10 February 2025 09:44:04 +0000 (0:00:05.043) 0:01:17.877 ******* 2025-02-10 09:46:31.267630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.267644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.267657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.267670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.267707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.267721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  
2025-02-10 09:46:31.267778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.267817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.267887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.267908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.267922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.267935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.267948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.267992 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268019 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268031 | orchestrator | 2025-02-10 09:46:31.268044 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-02-10 09:46:31.268057 | orchestrator | Monday 10 February 2025 09:44:11 +0000 (0:00:06.556) 0:01:24.434 ******* 2025-02-10 09:46:31.268069 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-02-10 09:46:31.268082 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:31.268101 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-10 09:46:31.268114 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-02-10 09:46:31.268126 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:31.268138 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-02-10 09:46:31.268175 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:31.268189 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-10 09:46:31.268207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-10 09:46:31.268220 | orchestrator | 2025-02-10 09:46:31.268233 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-02-10 09:46:31.268245 | orchestrator | Monday 10 February 2025 09:44:16 +0000 (0:00:05.372) 0:01:29.806 ******* 2025-02-10 09:46:31.268257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.268277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.268304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.268337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.268369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.268382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.268400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31 | INFO  | Task 8ab5b2f9-c40d-49f1-bb5e-9ab0a92e08ac is in state SUCCESS 2025-02-10 09:46:31.268647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268713 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.268745 | orchestrator | 2025-02-10 09:46:31.268758 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-02-10 09:46:31.268770 | orchestrator | Monday 10 February 2025 09:44:36 +0000 (0:00:20.066) 0:01:49.873 ******* 2025-02-10 09:46:31.268782 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:31.268795 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:31.268807 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:31.268819 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:46:31.268831 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:46:31.268843 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:46:31.268856 | orchestrator | 2025-02-10 09:46:31.268868 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-02-10 09:46:31.268880 | orchestrator | Monday 10 February 2025 09:44:46 +0000 (0:00:09.498) 0:01:59.371 ******* 2025-02-10 09:46:31.268893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.268906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.268971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.268997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.269035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 
09:46:31.269061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269074 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:31.269086 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:31.269098 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:31.269111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.269130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269198 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:31.269210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.269223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269275 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:31.269288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.269300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269350 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:31.269363 | orchestrator | 2025-02-10 09:46:31.269375 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-02-10 09:46:31.269387 | orchestrator | Monday 10 February 2025 09:44:49 +0000 (0:00:03.185) 0:02:02.557 ******* 2025-02-10 09:46:31.269400 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:31.269413 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:31.269425 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:31.269437 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:31.269449 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:31.269461 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:31.269473 | orchestrator | 2025-02-10 09:46:31.269486 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-02-10 09:46:31.269498 | orchestrator | Monday 10 February 2025 09:44:50 +0000 (0:00:00.818) 0:02:03.375 ******* 2025-02-10 09:46:31.269510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.269524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.269537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.269574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.269587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:46:31.269613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:46:31.269632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:46:31.269831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:46:31.269878 | orchestrator | 2025-02-10 09:46:31.269890 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-10 09:46:31.269903 | orchestrator | Monday 10 February 2025 09:44:53 +0000 (0:00:03.510) 0:02:06.886 ******* 2025-02-10 09:46:31.269915 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
09:46:31.269928 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:46:31.269940 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:46:31.269952 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:46:31.269964 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:46:31.269977 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:46:31.269989 | orchestrator |
2025-02-10 09:46:31.270002 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-02-10 09:46:31.270041 | orchestrator | Monday 10 February 2025 09:44:54 +0000 (0:00:00.764) 0:02:07.650 *******
2025-02-10 09:46:31.270057 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:46:31.270069 | orchestrator |
2025-02-10 09:46:31.270081 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-02-10 09:46:31.270094 | orchestrator | Monday 10 February 2025 09:44:56 +0000 (0:00:02.247) 0:02:09.897 *******
2025-02-10 09:46:31.270106 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:46:31.270119 | orchestrator |
2025-02-10 09:46:31.270131 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-02-10 09:46:31.270143 | orchestrator | Monday 10 February 2025 09:44:59 +0000 (0:00:02.447) 0:02:12.344 *******
2025-02-10 09:46:31.270181 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:46:31.270202 | orchestrator |
2025-02-10 09:46:31.270222 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:46:31.270252 | orchestrator | Monday 10 February 2025 09:45:16 +0000 (0:00:17.091) 0:02:29.436 *******
2025-02-10 09:46:31.270271 | orchestrator |
2025-02-10 09:46:31.270284 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:46:31.270296 | orchestrator | Monday 10 February 2025 09:45:16 +0000 (0:00:00.125) 0:02:29.561 *******
2025-02-10 09:46:31.270309 | orchestrator |
2025-02-10 09:46:31.270321 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:46:31.270334 | orchestrator | Monday 10 February 2025 09:45:16 +0000 (0:00:00.395) 0:02:29.957 *******
2025-02-10 09:46:31.270346 | orchestrator |
2025-02-10 09:46:31.270358 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:46:31.270370 | orchestrator | Monday 10 February 2025 09:45:16 +0000 (0:00:00.076) 0:02:30.033 *******
2025-02-10 09:46:31.270383 | orchestrator |
2025-02-10 09:46:31.270395 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:46:31.270407 | orchestrator | Monday 10 February 2025 09:45:16 +0000 (0:00:00.079) 0:02:30.112 *******
2025-02-10 09:46:31.270419 | orchestrator |
2025-02-10 09:46:31.270438 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:46:31.270450 | orchestrator | Monday 10 February 2025 09:45:17 +0000 (0:00:00.094) 0:02:30.207 *******
2025-02-10 09:46:31.270462 | orchestrator |
2025-02-10 09:46:31.270475 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-02-10 09:46:31.270487 | orchestrator | Monday 10 February 2025 09:45:17 +0000 (0:00:00.381) 0:02:30.588 *******
2025-02-10 09:46:31.270499 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:46:31.270512 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:46:31.270525 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:46:31.270537 | orchestrator |
2025-02-10 09:46:31.270549 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-02-10 09:46:31.270561 | orchestrator | Monday 10 February 2025 09:45:36 +0000 (0:00:18.937) 0:02:49.526 *******
2025-02-10 09:46:31.270582 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:46:31.270594 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:46:31.270606 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:46:31.270618 | orchestrator |
2025-02-10 09:46:31.270630 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-02-10 09:46:31.270642 | orchestrator | Monday 10 February 2025 09:45:48 +0000 (0:00:11.688) 0:03:01.214 *******
2025-02-10 09:46:31.270655 | orchestrator | changed: [testbed-node-5]
2025-02-10 09:46:31.270667 | orchestrator | changed: [testbed-node-4]
2025-02-10 09:46:31.270679 | orchestrator | changed: [testbed-node-3]
2025-02-10 09:46:31.270696 | orchestrator |
2025-02-10 09:46:31.270708 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-02-10 09:46:31.270720 | orchestrator | Monday 10 February 2025 09:46:16 +0000 (0:00:27.993) 0:03:29.208 *******
2025-02-10 09:46:31.270732 | orchestrator | changed: [testbed-node-3]
2025-02-10 09:46:31.270744 | orchestrator | changed: [testbed-node-4]
2025-02-10 09:46:31.270757 | orchestrator | changed: [testbed-node-5]
2025-02-10 09:46:31.270769 | orchestrator |
2025-02-10 09:46:31.270781 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-02-10 09:46:31.270793 | orchestrator | Monday 10 February 2025 09:46:28 +0000 (0:00:12.000) 0:03:41.209 *******
2025-02-10 09:46:31.270806 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:46:31.270818 | orchestrator |
2025-02-10 09:46:31.270830 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:46:31.270842 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-02-10 09:46:31.270855 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-02-10 09:46:31.270868 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-02-10 09:46:31.270880 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:46:31.270893 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:46:31.270905 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:46:31.270917 | orchestrator |
2025-02-10 09:46:31.270930 | orchestrator |
2025-02-10 09:46:31.270942 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:46:31.270954 | orchestrator | Monday 10 February 2025 09:46:28 +0000 (0:00:00.911) 0:03:42.121 *******
2025-02-10 09:46:31.270967 | orchestrator | ===============================================================================
2025-02-10 09:46:31.270979 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 27.99s
2025-02-10 09:46:31.270991 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 20.07s
2025-02-10 09:46:31.271003 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 18.94s
2025-02-10 09:46:31.271015 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.09s
2025-02-10 09:46:31.271027 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.00s
2025-02-10 09:46:31.271039 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.69s
2025-02-10 09:46:31.271051 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 9.50s
2025-02-10 09:46:31.271069 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.21s
2025-02-10 09:46:34.314614 | orchestrator | cinder : Copying over config.json files for services -------------------- 6.56s
2025-02-10 09:46:34.314756 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.96s
2025-02-10 09:46:34.314826 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.81s
2025-02-10 09:46:34.314841 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 5.58s
2025-02-10 09:46:34.314856 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 5.48s
2025-02-10 09:46:34.314870 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 5.37s
2025-02-10 09:46:34.314901 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 5.04s
2025-02-10 09:46:34.314916 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.97s
2025-02-10 09:46:34.314936 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.88s
2025-02-10 09:46:34.314951 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.81s
2025-02-10 09:46:34.314965 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.73s
2025-02-10 09:46:34.314979 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.59s
2025-02-10 09:46:34.314993 | orchestrator | 2025-02-10 09:46:31 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED
2025-02-10 09:46:34.315008 | orchestrator | 2025-02-10 09:46:31 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED
2025-02-10 09:46:34.315022 | orchestrator | 2025-02-10 09:46:31 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:46:34.315055 | orchestrator | 2025-02-10 09:46:34 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED
2025-02-10 09:46:34.316945 | orchestrator | 2025-02-10 09:46:34 | INFO  | Task 9504a7c1-7c2a-4ad2-8253-abc3e3431367 is in state SUCCESS
2025-02-10 09:46:34.318994 | orchestrator |
2025-02-10 09:46:34.319054 | orchestrator |
2025-02-10 09:46:34.319081 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-10 09:46:34.319105 | orchestrator |
2025-02-10 09:46:34.319120 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-10 09:46:34.319134 | orchestrator | Monday 10 February 2025 09:40:49 +0000 (0:00:00.405) 0:00:00.405 *******
2025-02-10 09:46:34.319219 | orchestrator | ok: [testbed-manager]
2025-02-10 09:46:34.319249 | orchestrator | ok: 
[testbed-node-0] 2025-02-10 09:46:34.319273 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:46:34.319289 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:46:34.319302 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:46:34.319316 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:46:34.319330 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:46:34.319349 | orchestrator | 2025-02-10 09:46:34.319373 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:46:34.319397 | orchestrator | Monday 10 February 2025 09:40:50 +0000 (0:00:01.091) 0:00:01.496 ******* 2025-02-10 09:46:34.319459 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-02-10 09:46:34.319476 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-02-10 09:46:34.319531 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-02-10 09:46:34.319547 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-02-10 09:46:34.319560 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-02-10 09:46:34.319577 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-02-10 09:46:34.319593 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-02-10 09:46:34.319620 | orchestrator | 2025-02-10 09:46:34.319636 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-02-10 09:46:34.319651 | orchestrator | 2025-02-10 09:46:34.319667 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-02-10 09:46:34.319762 | orchestrator | Monday 10 February 2025 09:40:51 +0000 (0:00:01.204) 0:00:02.700 ******* 2025-02-10 09:46:34.319885 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:46:34.319935 | orchestrator | 2025-02-10 09:46:34.319964 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-02-10 09:46:34.319985 | orchestrator | Monday 10 February 2025 09:40:53 +0000 (0:00:01.781) 0:00:04.482 ******* 2025-02-10 09:46:34.320002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.320023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.320047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.320096 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:46:34.320114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.320137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.320185 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.320214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.320239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.320274 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.320291 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320315 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-02-10 09:46:34.320330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.320345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.320359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.320410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.320425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.320447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.320516 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.320561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.320598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.320634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.320664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.320689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.320760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.320791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.320806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.320835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.320853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.320888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.320926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.320951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.320966 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:46:34.320981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.321011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.322304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322402 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.322435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.322448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.322493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.322537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.322551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.322615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.322627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.322658 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.322671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322704 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.322726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.322767 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.322827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.322887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.322908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.322927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.322959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.322987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.323044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.323086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.323123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.323184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.323200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})
2025-02-10 09:46:34.323212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-02-10 09:46:34.323224 | orchestrator |
2025-02-10 09:46:34.323236 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-02-10 09:46:34.323248 | orchestrator | Monday 10 February 2025 09:40:57 +0000 (0:00:03.992) 0:00:08.475 *******
2025-02-10 09:46:34.323261 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-10 09:46:34.323273 | orchestrator |
2025-02-10 09:46:34.323285 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-02-10 09:46:34.323296 | orchestrator | Monday 10 February 2025 09:40:59 +0000 (0:00:02.181) 0:00:10.656 *******
2025-02-10 09:46:34.323308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-02-10 09:46:34.323320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-02-10 09:46:34.323338 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-02-10 09:46:34.323358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.323378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.323391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.323402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.323414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323454 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.323473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323564 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323629 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:46:34.323641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.323689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.323721 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-02-10 09:46:34.323733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-02-10 09:46:34.323744 | orchestrator |
2025-02-10 09:46:34.323756 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-02-10 09:46:34.323774 | orchestrator | Monday 10 February 2025 09:41:07 +0000 (0:00:07.405) 0:00:18.062 *******
2025-02-10 09:46:34.323785 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-02-10 09:46:34.323802 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-02-10 09:46:34.323842 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-02-10 09:46:34.323864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.323886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.323905 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.323925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.323953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.323965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.323986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324019 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.324030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.324042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324095 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.324107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.324126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324237 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.324249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.324260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324294 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.324306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.324324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324348 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.324359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-02-10 09:46:34.324376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-02-10 09:46:34.324387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-02-10 09:46:34.324399 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:46:34.324414 | orchestrator |
2025-02-10 09:46:34.324433 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-02-10 09:46:34.324452 | orchestrator | Monday 10 February 2025 09:41:12 +0000 (0:00:04.770) 0:00:22.832 *******
2025-02-10 09:46:34.324480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-02-10 09:46:34.324500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-02-10 09:46:34.324528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.324620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 
09:46:34.324704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324733 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.324766 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.324785 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324805 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.324839 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.324860 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.324879 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.324898 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.324918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.324947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.324980 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.324992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.325004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.325027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.325039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.325051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.325062 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.325081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.325099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.325112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.325123 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.325135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:46:34.325188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.325212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.325232 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.325252 | orchestrator | 2025-02-10 09:46:34.325272 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-02-10 09:46:34.325295 | orchestrator | Monday 10 February 2025 09:41:16 +0000 (0:00:04.703) 0:00:27.535 ******* 2025-02-10 09:46:34.325315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.325356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.325940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.325995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.326009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.326075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.326115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.326128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.326229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.326275 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:46:34.326297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.326318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326383 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.326404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.326460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.326534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.326557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.326580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.326594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.326607 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.326621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.326646 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326704 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.326743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.326765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.326804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326855 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.326875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.326896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.326926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.326988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.327007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.327026 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.327046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.327065 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.327097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.327216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.327245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.327257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.327276 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.327287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.327360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.327370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.327382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.327407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.327424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.327436 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:46:34.327446 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.327463 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.327498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.327520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.327542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.327575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.327611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.327668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327687 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.327705 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.327716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.327726 | orchestrator | 2025-02-10 09:46:34.327737 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-02-10 09:46:34.327748 | orchestrator | Monday 10 February 2025 09:41:26 +0000 (0:00:09.831) 0:00:37.367 ******* 2025-02-10 09:46:34.327758 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:46:34.327769 | orchestrator | 2025-02-10 09:46:34.327779 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-02-10 09:46:34.327789 | orchestrator | Monday 10 February 2025 09:41:27 +0000 (0:00:01.217) 0:00:38.584 ******* 2025-02-10 09:46:34.327799 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072157, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4226432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327817 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072157, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4226432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327828 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072157, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4226432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327839 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072157, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4226432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327867 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072157, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4226432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327886 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072157, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4226432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327904 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072167, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4276433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327923 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072167, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4276433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327947 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072167, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4276433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327965 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072167, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4276433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.327993 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072167, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4276433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328011 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072167, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4276433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328023 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072159, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328033 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072157, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4226432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.328044 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072159, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328060 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072159, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328071 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072159, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328096 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072163, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4256432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328107 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072159, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328117 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072159, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328128 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072163, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4256432, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328138 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072163, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4256432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328242 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072163, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4256432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328268 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072204, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328287 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072163, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4256432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328298 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072163, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4256432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328309 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
12293, 'inode': 1072183, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4316432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328319 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072204, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328330 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072204, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328346 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072204, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328743 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072204, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328795 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072161, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4246433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328815 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072167, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4276433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.328832 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072204, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328849 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072183, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4316432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328918 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072183, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4316432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328941 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072183, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4316432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.328967 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072183, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4316432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329007 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072172, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4306433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329025 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072183, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4316432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329042 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072161, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4246433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329060 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072161, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4246433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329118 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072161, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4246433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329140 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072161, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4246433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329222 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072203, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329243 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072161, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4246433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329259 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072172, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4306433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329277 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072172, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4306433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329295 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072159, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.329355 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072172, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1739177125.4306433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329381 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072172, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4306433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329409 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072160, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329420 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072172, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4306433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329431 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072203, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329441 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072203, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329451 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072203, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329488 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072203, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329509 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072203, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329526 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072188, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4346435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329537 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.329548 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072160, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329558 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072160, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329569 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072160, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329579 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072160, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329623 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072160, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329653 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072188, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4346435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329673 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.329694 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072188, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4346435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329716 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.329738 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072188, 'dev': 186, 'nlink': 1, 'atime': 
1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4346435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329759 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.329778 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072188, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4346435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329797 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.329818 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072188, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4346435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:46:34.329838 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.329859 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072163, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4256432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.329949 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072204, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.329985 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072183, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4316432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-02-10 09:46:34.330004 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072161, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4246433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.330061 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072172, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4306433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.330080 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072203, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4386435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.330097 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072160, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.423643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.330108 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072188, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.4346435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:46:34.330125 | orchestrator | 2025-02-10 09:46:34.330136 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-02-10 09:46:34.330146 | orchestrator | Monday 10 February 2025 09:42:27 +0000 (0:00:59.485) 0:01:38.070 ******* 2025-02-10 09:46:34.330221 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:46:34.330232 | orchestrator | 2025-02-10 09:46:34.330278 | orchestrator | TASK [prometheus : Find 
prometheus host config overrides] ********************** 2025-02-10 09:46:34.330291 | orchestrator | Monday 10 February 2025 09:42:27 +0000 (0:00:00.387) 0:01:38.457 ******* 2025-02-10 09:46:34.330301 | orchestrator | [WARNING]: Skipped 2025-02-10 09:46:34.330314 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330324 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-02-10 09:46:34.330335 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330345 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-02-10 09:46:34.330355 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:46:34.330366 | orchestrator | [WARNING]: Skipped 2025-02-10 09:46:34.330377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330387 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-02-10 09:46:34.330397 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330407 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-02-10 09:46:34.330418 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:46:34.330428 | orchestrator | [WARNING]: Skipped 2025-02-10 09:46:34.330439 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330449 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-02-10 09:46:34.330459 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330469 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-02-10 09:46:34.330479 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-10 09:46:34.330490 | orchestrator | [WARNING]: Skipped 2025-02-10 09:46:34.330500 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330511 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-02-10 09:46:34.330521 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330531 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-02-10 09:46:34.330541 | orchestrator | [WARNING]: Skipped 2025-02-10 09:46:34.330552 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330562 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-02-10 09:46:34.330572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330583 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-02-10 09:46:34.330593 | orchestrator | [WARNING]: Skipped 2025-02-10 09:46:34.330603 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330614 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-02-10 09:46:34.330624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330635 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-02-10 09:46:34.330645 | orchestrator | [WARNING]: Skipped 2025-02-10 09:46:34.330656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330666 | 
orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-02-10 09:46:34.330677 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:46:34.330695 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-02-10 09:46:34.330705 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-10 09:46:34.330716 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:46:34.330726 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:46:34.330735 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:46:34.330744 | orchestrator | 2025-02-10 09:46:34.330753 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-02-10 09:46:34.330761 | orchestrator | Monday 10 February 2025 09:42:29 +0000 (0:00:01.380) 0:01:39.837 ******* 2025-02-10 09:46:34.330770 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:46:34.330778 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.330787 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:46:34.330796 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.330804 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:46:34.330813 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.330822 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:46:34.330831 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.330839 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:46:34.330848 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.330857 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:46:34.330865 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.330874 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-02-10 09:46:34.330883 | orchestrator | 2025-02-10 09:46:34.330892 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-02-10 09:46:34.330901 | orchestrator | Monday 10 February 2025 09:42:54 +0000 (0:00:25.656) 0:02:05.493 ******* 2025-02-10 09:46:34.330952 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:46:34.330963 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.330972 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:46:34.330981 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.330989 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:46:34.330998 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.331007 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:46:34.331016 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.331024 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:46:34.331033 | 
orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.331042 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:46:34.331051 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.331067 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-02-10 09:46:34.331076 | orchestrator | 2025-02-10 09:46:34.331085 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-02-10 09:46:34.331094 | orchestrator | Monday 10 February 2025 09:43:00 +0000 (0:00:05.892) 0:02:11.386 ******* 2025-02-10 09:46:34.331103 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:46:34.331112 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.331122 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:46:34.331137 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.331146 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:46:34.331177 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.331187 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:46:34.331195 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.331204 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:46:34.331215 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.331230 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:46:34.331245 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.331260 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-02-10 09:46:34.331275 | orchestrator | 2025-02-10 09:46:34.331289 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-02-10 09:46:34.331304 | orchestrator | Monday 10 February 2025 09:43:04 +0000 (0:00:03.642) 0:02:15.028 ******* 2025-02-10 09:46:34.331319 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:46:34.331334 | orchestrator | 2025-02-10 09:46:34.331350 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-02-10 09:46:34.331364 | orchestrator | Monday 10 February 2025 09:43:04 +0000 (0:00:00.494) 0:02:15.523 ******* 2025-02-10 09:46:34.331377 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.331391 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.331405 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.331419 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.331445 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.331460 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.331473 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.331481 | orchestrator | 2025-02-10 09:46:34.331490 | 
orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-02-10 09:46:34.331499 | orchestrator | Monday 10 February 2025 09:43:05 +0000 (0:00:00.704) 0:02:16.227 ******* 2025-02-10 09:46:34.331508 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.331516 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.331525 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.331534 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.331542 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:46:34.331551 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:46:34.331559 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:46:34.331568 | orchestrator | 2025-02-10 09:46:34.331576 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-02-10 09:46:34.331585 | orchestrator | Monday 10 February 2025 09:43:09 +0000 (0:00:04.107) 0:02:20.335 ******* 2025-02-10 09:46:34.331594 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:46:34.331602 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.331611 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:46:34.331619 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.331628 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:46:34.331637 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.331645 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:46:34.331654 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.331674 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:46:34.331683 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.331692 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:46:34.331701 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.331709 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:46:34.331718 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.331726 | orchestrator | 2025-02-10 09:46:34.331735 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-02-10 09:46:34.331744 | orchestrator | Monday 10 February 2025 09:43:14 +0000 (0:00:04.485) 0:02:24.821 ******* 2025-02-10 09:46:34.331752 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:46:34.331761 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.331770 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:46:34.331778 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.331787 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:46:34.331796 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.331804 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 
09:46:34.331813 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.331821 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:46:34.331830 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.331838 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:46:34.331903 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.331914 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-02-10 09:46:34.331923 | orchestrator | 2025-02-10 09:46:34.331933 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-02-10 09:46:34.331946 | orchestrator | Monday 10 February 2025 09:43:20 +0000 (0:00:06.398) 0:02:31.219 ******* 2025-02-10 09:46:34.331961 | orchestrator | [WARNING]: Skipped 2025-02-10 09:46:34.331976 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-02-10 09:46:34.331990 | orchestrator | due to this access issue: 2025-02-10 09:46:34.332005 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-02-10 09:46:34.332021 | orchestrator | not a directory 2025-02-10 09:46:34.332036 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:46:34.332049 | orchestrator | 2025-02-10 09:46:34.332060 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-02-10 09:46:34.332069 | orchestrator | Monday 10 February 2025 09:43:24 +0000 (0:00:04.043) 0:02:35.263 ******* 2025-02-10 09:46:34.332077 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.332086 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.332094 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.332103 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.332111 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.332119 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.332128 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.332137 | orchestrator | 2025-02-10 09:46:34.332145 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-02-10 09:46:34.332174 | orchestrator | Monday 10 February 2025 09:43:26 +0000 (0:00:02.142) 0:02:37.405 ******* 2025-02-10 09:46:34.332183 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.332199 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.332213 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.332227 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.332241 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.332253 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.332267 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.332286 | orchestrator | 2025-02-10 09:46:34.332300 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-02-10 09:46:34.332315 | orchestrator | Monday 10 February 2025 09:43:28 +0000 (0:00:02.171) 0:02:39.577 ******* 2025-02-10 09:46:34.332329 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:46:34.332343 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
09:46:34.332357 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:46:34.332371 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.332386 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:46:34.332400 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.332415 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:46:34.332431 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.332445 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:46:34.332460 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.332480 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:46:34.332494 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.332518 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:46:34.332533 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.332548 | orchestrator | 2025-02-10 09:46:34.332563 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-02-10 09:46:34.332578 | orchestrator | Monday 10 February 2025 09:43:34 +0000 (0:00:05.940) 0:02:45.517 ******* 2025-02-10 09:46:34.332591 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:46:34.332605 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:46:34.332619 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:46:34.332633 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:46:34.332645 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:46:34.332659 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:46:34.332674 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:46:34.332687 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:46:34.332702 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:46:34.332716 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:46:34.332730 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:46:34.332745 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:46:34.332758 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:46:34.332772 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:46:34.332785 | orchestrator | 2025-02-10 09:46:34.332799 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-02-10 09:46:34.332813 | orchestrator | Monday 10 February 2025 09:43:40 +0000 (0:00:05.547) 0:02:51.065 ******* 2025-02-10 09:46:34.332847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.332875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.332890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.332914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.332939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.332962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:46:34.332976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.332991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.333006 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:46:34.333043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.333071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.333125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.333228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.333303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.333367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.333383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.333399 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.333436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.333452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333479 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:46:34.333494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333510 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333526 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.333575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.333599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.333615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.333631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.333695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.333717 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.333732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.333785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.333820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.333842 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.333857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.333880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.333910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.333925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.333946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.333967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.333990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.334006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.334053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.334076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.334097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:46:34.334122 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.334137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.334171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:46:34.334200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:46:34.334222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:46:34.334236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.334250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.334265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.334279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.334294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.334321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.334347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.334361 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.334375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.334389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.334402 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.334416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:46:34.334441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.334463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:46:34.334477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:46:34.334491 | orchestrator | 2025-02-10 09:46:34.334506 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-02-10 09:46:34.334520 | orchestrator | Monday 10 February 2025 09:43:49 +0000 (0:00:09.121) 0:03:00.187 ******* 2025-02-10 09:46:34.334535 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-02-10 09:46:34.334550 | orchestrator | 2025-02-10 09:46:34.334563 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:46:34.334577 | orchestrator | Monday 10 February 2025 09:43:53 +0000 (0:00:04.192) 0:03:04.379 ******* 2025-02-10 09:46:34.334590 | orchestrator | 2025-02-10 09:46:34.334603 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:46:34.334616 | orchestrator | Monday 10 February 2025 09:43:54 +0000 (0:00:00.569) 0:03:04.948 ******* 2025-02-10 09:46:34.334630 | orchestrator | 2025-02-10 09:46:34.334643 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:46:34.334657 | orchestrator | Monday 10 February 2025 09:43:54 +0000 (0:00:00.110) 0:03:05.059 ******* 2025-02-10 09:46:34.334670 | orchestrator | 2025-02-10 09:46:34.334684 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:46:34.334697 | orchestrator | Monday 10 February 2025 09:43:54 +0000 (0:00:00.130) 0:03:05.190 ******* 2025-02-10 09:46:34.334713 | orchestrator | 2025-02-10 09:46:34.334727 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2025-02-10 09:46:34.334741 | orchestrator | Monday 10 February 2025 09:43:54 +0000 (0:00:00.132) 0:03:05.322 ******* 2025-02-10 09:46:34.334756 | orchestrator | 2025-02-10 09:46:34.334770 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:46:34.334784 | orchestrator | Monday 10 February 2025 09:43:55 +0000 (0:00:00.497) 0:03:05.819 ******* 2025-02-10 09:46:34.334797 | orchestrator | 2025-02-10 09:46:34.334811 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:46:34.334824 | orchestrator | Monday 10 February 2025 09:43:55 +0000 (0:00:00.074) 0:03:05.894 ******* 2025-02-10 09:46:34.334837 | orchestrator | 2025-02-10 09:46:34.334857 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-02-10 09:46:34.334870 | orchestrator | Monday 10 February 2025 09:43:55 +0000 (0:00:00.090) 0:03:05.984 ******* 2025-02-10 09:46:34.334883 | orchestrator | changed: [testbed-manager] 2025-02-10 09:46:34.334897 | orchestrator | 2025-02-10 09:46:34.334910 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-02-10 09:46:34.334927 | orchestrator | Monday 10 February 2025 09:44:24 +0000 (0:00:29.021) 0:03:35.006 ******* 2025-02-10 09:46:34.334940 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:46:34.334954 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:46:34.334967 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:46:34.334981 | orchestrator | changed: [testbed-manager] 2025-02-10 09:46:34.334994 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:46:34.335008 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:46:34.335022 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:46:34.335036 | orchestrator | 2025-02-10 09:46:34.335051 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-02-10 09:46:34.335065 | orchestrator | Monday 10 February 2025 09:44:55 +0000 (0:00:30.830) 0:04:05.837 ******* 2025-02-10 09:46:34.335080 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:46:34.335094 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:46:34.335107 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:46:34.335121 | orchestrator | 2025-02-10 09:46:34.335135 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-02-10 09:46:34.335166 | orchestrator | Monday 10 February 2025 09:45:08 +0000 (0:00:13.520) 0:04:19.357 ******* 2025-02-10 09:46:34.335180 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:46:34.335194 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:46:34.335207 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:46:34.335221 | orchestrator | 2025-02-10 09:46:34.335234 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-02-10 09:46:34.335247 | orchestrator | Monday 10 February 2025 09:45:19 +0000 (0:00:10.573) 0:04:29.931 ******* 2025-02-10 09:46:34.335260 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:46:34.335274 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:46:34.335287 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:46:34.335301 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:46:34.335314 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:46:34.335333 | orchestrator | changed: 
[testbed-node-5] 2025-02-10 09:46:34.335347 | orchestrator | changed: [testbed-manager] 2025-02-10 09:46:34.335361 | orchestrator | 2025-02-10 09:46:34.335374 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-02-10 09:46:34.335388 | orchestrator | Monday 10 February 2025 09:45:39 +0000 (0:00:20.749) 0:04:50.680 ******* 2025-02-10 09:46:34.335402 | orchestrator | changed: [testbed-manager] 2025-02-10 09:46:34.335415 | orchestrator | 2025-02-10 09:46:34.335429 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-02-10 09:46:34.335442 | orchestrator | Monday 10 February 2025 09:45:52 +0000 (0:00:12.562) 0:05:03.243 ******* 2025-02-10 09:46:34.335456 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:46:34.335469 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:46:34.335482 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:46:34.335496 | orchestrator | 2025-02-10 09:46:34.335509 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-02-10 09:46:34.335522 | orchestrator | Monday 10 February 2025 09:46:11 +0000 (0:00:19.330) 0:05:22.573 ******* 2025-02-10 09:46:34.335535 | orchestrator | changed: [testbed-manager] 2025-02-10 09:46:34.335549 | orchestrator | 2025-02-10 09:46:34.335563 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-02-10 09:46:34.335576 | orchestrator | Monday 10 February 2025 09:46:21 +0000 (0:00:09.158) 0:05:31.731 ******* 2025-02-10 09:46:34.335588 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:46:34.335601 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:46:34.335622 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:46:34.335635 | orchestrator | 2025-02-10 09:46:34.335648 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:46:34.335662 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 09:46:34.335677 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-10 09:46:34.335691 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-10 09:46:34.335706 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-10 09:46:34.335719 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:46:34.335733 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:46:34.335747 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:46:34.335760 | orchestrator | 2025-02-10 09:46:34.335774 | orchestrator | 2025-02-10 09:46:34.335788 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:46:34.335802 | orchestrator | Monday 10 February 2025 09:46:30 +0000 (0:00:09.883) 0:05:41.615 ******* 2025-02-10 09:46:34.335816 | orchestrator | =============================================================================== 2025-02-10 09:46:34.335829 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 59.48s 2025-02-10 09:46:34.335843 | orchestrator | prometheus : Restart 
prometheus-node-exporter container ---------------- 30.83s 2025-02-10 09:46:34.335856 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 29.02s 2025-02-10 09:46:34.335870 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 25.66s 2025-02-10 09:46:34.335883 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 20.75s 2025-02-10 09:46:34.335896 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 19.33s 2025-02-10 09:46:34.335909 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.52s 2025-02-10 09:46:34.335922 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.56s 2025-02-10 09:46:34.335936 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.57s 2025-02-10 09:46:34.335949 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.88s 2025-02-10 09:46:34.335962 | orchestrator | prometheus : Copying over config.json files ----------------------------- 9.83s 2025-02-10 09:46:34.335975 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.16s 2025-02-10 09:46:34.335989 | orchestrator | prometheus : Check prometheus containers -------------------------------- 9.12s 2025-02-10 09:46:34.336007 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.41s 2025-02-10 09:46:34.336021 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 6.40s 2025-02-10 09:46:34.336035 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 5.94s 2025-02-10 09:46:34.336048 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.89s 2025-02-10 09:46:34.336061 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 5.55s 2025-02-10 09:46:34.336075 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 4.77s 2025-02-10 09:46:34.336089 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.70s 2025-02-10 09:46:34.336113 | orchestrator | 2025-02-10 09:46:34 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:37.372844 | orchestrator | 2025-02-10 09:46:34 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:37.373022 | orchestrator | 2025-02-10 09:46:34 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:46:37.373044 | orchestrator | 2025-02-10 09:46:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:37.373080 | orchestrator | 2025-02-10 09:46:37 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:37.373731 | orchestrator | 2025-02-10 09:46:37 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:37.374981 | orchestrator | 2025-02-10 09:46:37 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:37.376486 | orchestrator | 2025-02-10 09:46:37 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:46:37.376779 | orchestrator | 2025-02-10 09:46:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:40.416549 | orchestrator | 2025-02-10 09:46:40 | INFO  | 
Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:40.417576 | orchestrator | 2025-02-10 09:46:40 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:40.417663 | orchestrator | 2025-02-10 09:46:40 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:40.418355 | orchestrator | 2025-02-10 09:46:40 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:46:43.463915 | orchestrator | 2025-02-10 09:46:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:43.464084 | orchestrator | 2025-02-10 09:46:43 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:43.466070 | orchestrator | 2025-02-10 09:46:43 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:43.468094 | orchestrator | 2025-02-10 09:46:43 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:43.470395 | orchestrator | 2025-02-10 09:46:43 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:46:46.513059 | orchestrator | 2025-02-10 09:46:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:46.513238 | orchestrator | 2025-02-10 09:46:46 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:46.513942 | orchestrator | 2025-02-10 09:46:46 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:46.513980 | orchestrator | 2025-02-10 09:46:46 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:49.553691 | orchestrator | 2025-02-10 09:46:46 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:46:49.554110 | orchestrator | 2025-02-10 09:46:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:49.554268 | orchestrator | 2025-02-10 09:46:49 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:49.557144 | orchestrator | 2025-02-10 09:46:49 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:49.557207 | orchestrator | 2025-02-10 09:46:49 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:52.629320 | orchestrator | 2025-02-10 09:46:49 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:46:52.629437 | orchestrator | 2025-02-10 09:46:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:52.629495 | orchestrator | 2025-02-10 09:46:52 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:52.634930 | orchestrator | 2025-02-10 09:46:52 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:52.637746 | orchestrator | 2025-02-10 09:46:52 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:52.640286 | orchestrator | 2025-02-10 09:46:52 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:46:55.705092 | orchestrator | 2025-02-10 09:46:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:55.705302 | orchestrator | 2025-02-10 09:46:55 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:58.737898 | orchestrator | 2025-02-10 09:46:55 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:58.738313 | orchestrator | 2025-02-10 09:46:55 | INFO  | 
Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:58.738349 | orchestrator | 2025-02-10 09:46:55 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:46:58.738366 | orchestrator | 2025-02-10 09:46:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:58.738427 | orchestrator | 2025-02-10 09:46:58 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:46:58.738801 | orchestrator | 2025-02-10 09:46:58 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:46:58.738845 | orchestrator | 2025-02-10 09:46:58 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:46:58.739770 | orchestrator | 2025-02-10 09:46:58 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:01.788048 | orchestrator | 2025-02-10 09:46:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:01.788234 | orchestrator | 2025-02-10 09:47:01 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:04.833541 | orchestrator | 2025-02-10 09:47:01 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:04.833675 | orchestrator | 2025-02-10 09:47:01 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:04.833693 | orchestrator | 2025-02-10 09:47:01 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:04.833709 | orchestrator | 2025-02-10 09:47:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:04.833743 | orchestrator | 2025-02-10 09:47:04 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:04.835849 | orchestrator | 2025-02-10 09:47:04 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:04.835913 | orchestrator | 2025-02-10 09:47:04 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:04.837449 | orchestrator | 2025-02-10 09:47:04 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:07.873661 | orchestrator | 2025-02-10 09:47:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:07.873817 | orchestrator | 2025-02-10 09:47:07 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:07.877231 | orchestrator | 2025-02-10 09:47:07 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:07.877281 | orchestrator | 2025-02-10 09:47:07 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:10.935904 | orchestrator | 2025-02-10 09:47:07 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:10.936045 | orchestrator | 2025-02-10 09:47:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:10.936085 | orchestrator | 2025-02-10 09:47:10 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:10.937449 | orchestrator | 2025-02-10 09:47:10 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:10.938791 | orchestrator | 2025-02-10 09:47:10 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:10.940957 | orchestrator | 2025-02-10 09:47:10 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:10.941071 | orchestrator | 2025-02-10 09:47:10 | INFO  | 
Wait 1 second(s) until the next check 2025-02-10 09:47:13.984890 | orchestrator | 2025-02-10 09:47:13 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:13.985219 | orchestrator | 2025-02-10 09:47:13 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:13.986522 | orchestrator | 2025-02-10 09:47:13 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:13.987839 | orchestrator | 2025-02-10 09:47:13 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:17.036075 | orchestrator | 2025-02-10 09:47:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:17.036277 | orchestrator | 2025-02-10 09:47:17 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:20.089984 | orchestrator | 2025-02-10 09:47:17 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:20.090213 | orchestrator | 2025-02-10 09:47:17 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:20.090228 | orchestrator | 2025-02-10 09:47:17 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:20.090236 | orchestrator | 2025-02-10 09:47:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:20.090257 | orchestrator | 2025-02-10 09:47:20 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:20.090740 | orchestrator | 2025-02-10 09:47:20 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:20.090768 | orchestrator | 2025-02-10 09:47:20 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:20.091433 | orchestrator | 2025-02-10 09:47:20 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:23.145622 | orchestrator | 2025-02-10 09:47:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:23.145767 | orchestrator | 2025-02-10 09:47:23 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:23.148364 | orchestrator | 2025-02-10 09:47:23 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:23.148796 | orchestrator | 2025-02-10 09:47:23 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:23.149658 | orchestrator | 2025-02-10 09:47:23 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:26.221116 | orchestrator | 2025-02-10 09:47:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:26.221395 | orchestrator | 2025-02-10 09:47:26 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:26.221936 | orchestrator | 2025-02-10 09:47:26 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:26.222001 | orchestrator | 2025-02-10 09:47:26 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:26.222783 | orchestrator | 2025-02-10 09:47:26 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state STARTED 2025-02-10 09:47:29.280354 | orchestrator | 2025-02-10 09:47:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:29.280641 | orchestrator | 2025-02-10 09:47:29 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:32.340476 | orchestrator | 2025-02-10 09:47:29 | INFO  | Task 
67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:32.340620 | orchestrator | 2025-02-10 09:47:29 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:32.340640 | orchestrator | 2025-02-10 09:47:29 | INFO  | Task 1e0de5d1-8269-4076-8f40-877e29ca5616 is in state SUCCESS 2025-02-10 09:47:32.340657 | orchestrator | 2025-02-10 09:47:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:32.340693 | orchestrator | 2025-02-10 09:47:32 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:32.341341 | orchestrator | 2025-02-10 09:47:32 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:47:32.341378 | orchestrator | 2025-02-10 09:47:32 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:32.342585 | orchestrator | 2025-02-10 09:47:32 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:35.385114 | orchestrator | 2025-02-10 09:47:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:35.385296 | orchestrator | 2025-02-10 09:47:35 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:35.386284 | orchestrator | 2025-02-10 09:47:35 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:47:35.386329 | orchestrator | 2025-02-10 09:47:35 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:35.388528 | orchestrator | 2025-02-10 09:47:35 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:38.434457 | orchestrator | 2025-02-10 09:47:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:38.434776 | orchestrator | 2025-02-10 09:47:38 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:38.436339 | orchestrator | 2025-02-10 09:47:38 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:47:38.436393 | orchestrator | 2025-02-10 09:47:38 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state STARTED 2025-02-10 09:47:38.438449 | orchestrator | 2025-02-10 09:47:38 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:41.481139 | orchestrator | 2025-02-10 09:47:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:41.481323 | orchestrator | 2025-02-10 09:47:41 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:41.481540 | orchestrator | 2025-02-10 09:47:41 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:47:41.481572 | orchestrator | 2025-02-10 09:47:41 | INFO  | Task 67a97af0-1ee5-4eea-99fc-0e5c31203fba is in state SUCCESS 2025-02-10 09:47:41.483017 | orchestrator | 2025-02-10 09:47:41.483058 | orchestrator | 2025-02-10 09:47:41.483075 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:47:41.483091 | orchestrator | 2025-02-10 09:47:41.483136 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:47:41.483151 | orchestrator | Monday 10 February 2025 09:46:33 +0000 (0:00:00.332) 0:00:00.332 ******* 2025-02-10 09:47:41.483191 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:41.483210 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:47:41.483225 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:47:41.483239 | orchestrator | 
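[Editor's note] The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" INFO lines above come from the deploy wrapper on the manager polling each submitted task roughly once per second until it leaves the STARTED state (as happened for task 1e0de5d1 and 67a97af0, which reached SUCCESS). The following is a minimal, illustrative Python sketch of that wait pattern only; it is not the actual OSISM implementation, and the get_state callable is a hypothetical stand-in for whatever task-backend lookup the manager really performs.

import time
from typing import Callable, Dict, Iterable


def wait_for_tasks(
    task_ids: Iterable[str],
    get_state: Callable[[str], str],  # hypothetical lookup, e.g. a query against the task backend
    interval: float = 1.0,
) -> Dict[str, str]:
    """Poll task states every `interval` seconds until none is STARTED."""
    states = {task_id: "STARTED" for task_id in task_ids}
    while True:
        # Re-check only tasks that are still running, then report all states.
        for task_id, state in states.items():
            if state == "STARTED":
                states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)


# Usage example with a fake backend that finishes each task after three polls:
if __name__ == "__main__":
    calls: Dict[str, int] = {}

    def fake_state(task_id: str) -> str:
        calls[task_id] = calls.get(task_id, 0) + 1
        return "SUCCESS" if calls[task_id] >= 3 else "STARTED"

    wait_for_tasks(["ea200019", "67a97af0"], fake_state, interval=0.1)

[End editor's note]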
2025-02-10 09:47:41.483254 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:47:41.483269 | orchestrator | Monday 10 February 2025 09:46:34 +0000 (0:00:00.445) 0:00:00.778 ******* 2025-02-10 09:47:41.483283 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-02-10 09:47:41.483298 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-02-10 09:47:41.483313 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-02-10 09:47:41.483327 | orchestrator | 2025-02-10 09:47:41.483341 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-02-10 09:47:41.483356 | orchestrator | 2025-02-10 09:47:41.483370 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-02-10 09:47:41.483384 | orchestrator | Monday 10 February 2025 09:46:34 +0000 (0:00:00.557) 0:00:01.335 ******* 2025-02-10 09:47:41.483398 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:41.483413 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:47:41.483427 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:47:41.483442 | orchestrator | 2025-02-10 09:47:41.483456 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:47:41.483472 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:47:41.483487 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:47:41.483502 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:47:41.483516 | orchestrator | 2025-02-10 09:47:41.483530 | orchestrator | 2025-02-10 09:47:41.483544 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:47:41.484047 | orchestrator | Monday 10 February 2025 09:47:27 +0000 (0:00:53.128) 0:00:54.463 ******* 2025-02-10 09:47:41.484065 | orchestrator | =============================================================================== 2025-02-10 09:47:41.484078 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 53.13s 2025-02-10 09:47:41.484092 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-02-10 09:47:41.484106 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2025-02-10 09:47:41.484120 | orchestrator | 2025-02-10 09:47:41.484134 | orchestrator | 2025-02-10 09:47:41.484148 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:47:41.484189 | orchestrator | 2025-02-10 09:47:41.484204 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:47:41.484218 | orchestrator | Monday 10 February 2025 09:42:38 +0000 (0:00:00.572) 0:00:00.572 ******* 2025-02-10 09:47:41.484232 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:41.484247 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:47:41.484280 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:47:41.484294 | orchestrator | 2025-02-10 09:47:41.484308 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:47:41.484323 | orchestrator | Monday 10 February 2025 09:42:39 +0000 (0:00:00.465) 0:00:01.038 ******* 2025-02-10 
09:47:41.484336 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-02-10 09:47:41.484351 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-02-10 09:47:41.484365 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-02-10 09:47:41.484378 | orchestrator | 2025-02-10 09:47:41.484392 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-02-10 09:47:41.484418 | orchestrator | 2025-02-10 09:47:41.484432 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-10 09:47:41.484446 | orchestrator | Monday 10 February 2025 09:42:39 +0000 (0:00:00.400) 0:00:01.438 ******* 2025-02-10 09:47:41.484460 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:47:41.484474 | orchestrator | 2025-02-10 09:47:41.484488 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-02-10 09:47:41.484502 | orchestrator | Monday 10 February 2025 09:42:41 +0000 (0:00:01.362) 0:00:02.800 ******* 2025-02-10 09:47:41.484516 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-02-10 09:47:41.484536 | orchestrator | 2025-02-10 09:47:41.484551 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-02-10 09:47:41.484565 | orchestrator | Monday 10 February 2025 09:42:45 +0000 (0:00:04.097) 0:00:06.898 ******* 2025-02-10 09:47:41.484579 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-02-10 09:47:41.484593 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-02-10 09:47:41.484607 | orchestrator | 2025-02-10 09:47:41.484621 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-02-10 09:47:41.484635 | orchestrator | Monday 10 February 2025 09:42:52 +0000 (0:00:07.189) 0:00:14.088 ******* 2025-02-10 09:47:41.484651 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:47:41.484668 | orchestrator | 2025-02-10 09:47:41.484684 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-02-10 09:47:41.484700 | orchestrator | Monday 10 February 2025 09:42:56 +0000 (0:00:03.951) 0:00:18.040 ******* 2025-02-10 09:47:41.484715 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:47:41.484738 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-02-10 09:47:41.484754 | orchestrator | 2025-02-10 09:47:41.484771 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-02-10 09:47:41.484786 | orchestrator | Monday 10 February 2025 09:43:00 +0000 (0:00:03.924) 0:00:21.964 ******* 2025-02-10 09:47:41.484802 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:47:41.484817 | orchestrator | 2025-02-10 09:47:41.484833 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-02-10 09:47:41.484848 | orchestrator | Monday 10 February 2025 09:43:03 +0000 (0:00:03.016) 0:00:24.980 ******* 2025-02-10 09:47:41.484864 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-02-10 09:47:41.484879 | orchestrator | 2025-02-10 09:47:41.484895 | orchestrator | TASK [glance : Ensuring config 
directories exist] ****************************** 2025-02-10 09:47:41.484910 | orchestrator | Monday 10 February 2025 09:43:07 +0000 (0:00:04.316) 0:00:29.297 ******* 2025-02-10 09:47:41.484929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.485002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.485032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.485055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.485093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.485109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.485140 | orchestrator | 2025-02-10 09:47:41.485155 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-10 09:47:41.485189 | orchestrator | Monday 10 February 2025 09:43:13 +0000 (0:00:05.965) 0:00:35.262 ******* 2025-02-10 09:47:41.485204 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:47:41.485224 | orchestrator | 2025-02-10 09:47:41.485238 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-02-10 09:47:41.485252 | orchestrator | Monday 10 February 2025 09:43:14 +0000 (0:00:00.595) 0:00:35.857 ******* 2025-02-10 09:47:41.485266 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:41.485281 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:47:41.485295 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:47:41.485309 | orchestrator | 2025-02-10 09:47:41.485323 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-02-10 09:47:41.485343 | orchestrator | Monday 10 February 2025 09:43:30 +0000 (0:00:16.451) 0:00:52.309 ******* 2025-02-10 09:47:41.485358 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:47:41.485372 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:47:41.485386 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:47:41.485399 | orchestrator | 2025-02-10 09:47:41.485413 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-02-10 09:47:41.485427 | orchestrator | Monday 10 February 2025 09:43:34 +0000 (0:00:04.079) 0:00:56.388 ******* 2025-02-10 09:47:41.485441 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:47:41.485455 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:47:41.485468 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:47:41.485482 | orchestrator | 2025-02-10 09:47:41.485496 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-02-10 09:47:41.485517 | orchestrator | Monday 10 February 2025 09:43:36 +0000 (0:00:01.934) 0:00:58.322 ******* 2025-02-10 09:47:41.485531 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:41.485545 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:47:41.485559 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:47:41.485572 | orchestrator | 2025-02-10 
09:47:41.485586 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-02-10 09:47:41.485600 | orchestrator | Monday 10 February 2025 09:43:38 +0000 (0:00:01.963) 0:01:00.286 ******* 2025-02-10 09:47:41.485614 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.485627 | orchestrator | 2025-02-10 09:47:41.485641 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-02-10 09:47:41.485655 | orchestrator | Monday 10 February 2025 09:43:39 +0000 (0:00:00.361) 0:01:00.648 ******* 2025-02-10 09:47:41.485668 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.485682 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.485696 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.485710 | orchestrator | 2025-02-10 09:47:41.485723 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-10 09:47:41.485737 | orchestrator | Monday 10 February 2025 09:43:39 +0000 (0:00:00.626) 0:01:01.275 ******* 2025-02-10 09:47:41.485750 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:47:41.485764 | orchestrator | 2025-02-10 09:47:41.485778 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-02-10 09:47:41.485792 | orchestrator | Monday 10 February 2025 09:43:40 +0000 (0:00:01.212) 0:01:02.487 ******* 2025-02-10 09:47:41.485807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.485842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.485866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.485890 | orchestrator | 2025-02-10 09:47:41.485905 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-02-10 09:47:41.485924 | orchestrator | Monday 10 February 2025 
09:43:50 +0000 (0:00:09.935) 0:01:12.423 ******* 2025-02-10 09:47:41.485947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:47:41.485970 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.485984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:47:41.486009 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.486099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:47:41.486124 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.486138 | orchestrator | 2025-02-10 09:47:41.486152 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-02-10 09:47:41.486197 | orchestrator | Monday 10 February 2025 09:43:58 +0000 (0:00:07.359) 0:01:19.782 ******* 2025-02-10 09:47:41.486213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:47:41.486228 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.486242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:47:41.486285 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.486301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:47:41.486316 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.486330 | orchestrator | 2025-02-10 09:47:41.486344 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-02-10 09:47:41.486358 | orchestrator | Monday 10 February 2025 09:44:10 +0000 (0:00:12.392) 0:01:32.175 ******* 2025-02-10 09:47:41.486372 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.486386 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.486399 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.486413 | orchestrator | 2025-02-10 09:47:41.486427 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-02-10 09:47:41.486441 | orchestrator | Monday 10 February 2025 09:44:20 +0000 (0:00:10.042) 0:01:42.217 ******* 2025-02-10 09:47:41.486455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.486496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.486521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.486557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.486573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 
09:47:41.486605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.486629 | orchestrator | 2025-02-10 09:47:41.486643 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-02-10 09:47:41.486657 | orchestrator | Monday 10 February 2025 09:44:36 +0000 (0:00:15.435) 0:01:57.653 ******* 2025-02-10 09:47:41.486671 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:41.486684 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:47:41.486698 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:47:41.486712 | orchestrator | 2025-02-10 09:47:41.486726 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-02-10 09:47:41.486739 | orchestrator | Monday 10 February 2025 09:44:57 +0000 (0:00:21.207) 0:02:18.861 ******* 2025-02-10 09:47:41.486753 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.486767 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.486781 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.486794 | orchestrator | 2025-02-10 09:47:41.486808 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-02-10 09:47:41.486822 | orchestrator | Monday 10 February 2025 09:45:08 +0000 (0:00:11.232) 0:02:30.093 ******* 2025-02-10 09:47:41.486835 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.486849 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.486863 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.486876 | orchestrator | 2025-02-10 09:47:41.486890 | orchestrator | TASK [glance : Copying over 
glance-image-import.conf] ************************** 2025-02-10 09:47:41.486904 | orchestrator | Monday 10 February 2025 09:45:24 +0000 (0:00:16.449) 0:02:46.542 ******* 2025-02-10 09:47:41.486918 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.486931 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.486945 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.486959 | orchestrator | 2025-02-10 09:47:41.486973 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-02-10 09:47:41.486986 | orchestrator | Monday 10 February 2025 09:45:43 +0000 (0:00:18.946) 0:03:05.489 ******* 2025-02-10 09:47:41.487000 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.487014 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.487027 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.487041 | orchestrator | 2025-02-10 09:47:41.487055 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-02-10 09:47:41.487069 | orchestrator | Monday 10 February 2025 09:46:04 +0000 (0:00:20.713) 0:03:26.202 ******* 2025-02-10 09:47:41.487090 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.487104 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.487118 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.487131 | orchestrator | 2025-02-10 09:47:41.487145 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-02-10 09:47:41.487159 | orchestrator | Monday 10 February 2025 09:46:04 +0000 (0:00:00.372) 0:03:26.575 ******* 2025-02-10 09:47:41.487191 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-02-10 09:47:41.487205 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.487219 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-02-10 09:47:41.487233 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.487247 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-02-10 09:47:41.487261 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:41.487274 | orchestrator | 2025-02-10 09:47:41.487288 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-02-10 09:47:41.487303 | orchestrator | Monday 10 February 2025 09:46:09 +0000 (0:00:04.191) 0:03:30.766 ******* 2025-02-10 09:47:41.487325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.487351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.487383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.487408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.487446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:47:41.487463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:47:41.487493 | orchestrator | 2025-02-10 09:47:41.487507 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-10 09:47:41.487521 | orchestrator | Monday 10 February 2025 09:46:13 +0000 (0:00:04.707) 0:03:35.474 ******* 2025-02-10 09:47:41.487535 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:41.487549 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:41.487562 | orchestrator | 
skipping: [testbed-node-2] 2025-02-10 09:47:41.487576 | orchestrator | 2025-02-10 09:47:41.487589 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-02-10 09:47:41.487603 | orchestrator | Monday 10 February 2025 09:46:14 +0000 (0:00:00.538) 0:03:36.012 ******* 2025-02-10 09:47:41.487616 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:41.487630 | orchestrator | 2025-02-10 09:47:41.487644 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-02-10 09:47:41.487657 | orchestrator | Monday 10 February 2025 09:46:17 +0000 (0:00:02.674) 0:03:38.686 ******* 2025-02-10 09:47:41.487671 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:41.487685 | orchestrator | 2025-02-10 09:47:41.487698 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-02-10 09:47:41.487712 | orchestrator | Monday 10 February 2025 09:46:19 +0000 (0:00:02.814) 0:03:41.500 ******* 2025-02-10 09:47:41.487725 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:41.487739 | orchestrator | 2025-02-10 09:47:41.487753 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-02-10 09:47:41.487772 | orchestrator | Monday 10 February 2025 09:46:22 +0000 (0:00:02.699) 0:03:44.200 ******* 2025-02-10 09:47:41.487786 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:41.487800 | orchestrator | 2025-02-10 09:47:41.487814 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-02-10 09:47:41.487828 | orchestrator | Monday 10 February 2025 09:46:52 +0000 (0:00:29.546) 0:04:13.746 ******* 2025-02-10 09:47:41.487841 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:41.487855 | orchestrator | 2025-02-10 09:47:41.487868 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-02-10 09:47:41.487882 | orchestrator | Monday 10 February 2025 09:46:54 +0000 (0:00:02.386) 0:04:16.133 ******* 2025-02-10 09:47:41.487895 | orchestrator | 2025-02-10 09:47:41.487909 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-02-10 09:47:41.487923 | orchestrator | Monday 10 February 2025 09:46:54 +0000 (0:00:00.071) 0:04:16.204 ******* 2025-02-10 09:47:41.487936 | orchestrator | 2025-02-10 09:47:41.487950 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-02-10 09:47:41.487964 | orchestrator | Monday 10 February 2025 09:46:54 +0000 (0:00:00.072) 0:04:16.276 ******* 2025-02-10 09:47:41.487977 | orchestrator | 2025-02-10 09:47:41.487991 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-02-10 09:47:41.488004 | orchestrator | Monday 10 February 2025 09:46:54 +0000 (0:00:00.288) 0:04:16.564 ******* 2025-02-10 09:47:41.488018 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:41.488031 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:47:41.488051 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:47:41.488065 | orchestrator | 2025-02-10 09:47:41.488079 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:47:41.488099 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-02-10 09:47:44.525558 | orchestrator | testbed-node-1 : ok=15  changed=9  
unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-02-10 09:47:44.525702 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-02-10 09:47:44.525722 | orchestrator | 2025-02-10 09:47:44.525738 | orchestrator | 2025-02-10 09:47:44.525791 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:47:44.525807 | orchestrator | Monday 10 February 2025 09:47:39 +0000 (0:00:44.884) 0:05:01.449 ******* 2025-02-10 09:47:44.525821 | orchestrator | =============================================================================== 2025-02-10 09:47:44.525836 | orchestrator | glance : Restart glance-api container ---------------------------------- 44.88s 2025-02-10 09:47:44.525850 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.55s 2025-02-10 09:47:44.525864 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 21.21s 2025-02-10 09:47:44.525878 | orchestrator | glance : Copying over property-protections-rules.conf ------------------ 20.71s 2025-02-10 09:47:44.525892 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 18.95s 2025-02-10 09:47:44.525906 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 16.45s 2025-02-10 09:47:44.525920 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 16.45s 2025-02-10 09:47:44.525933 | orchestrator | glance : Copying over config.json files for services ------------------- 15.43s 2025-02-10 09:47:44.525948 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ----- 12.39s 2025-02-10 09:47:44.525962 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 11.23s 2025-02-10 09:47:44.525976 | orchestrator | glance : Creating TLS backend PEM File --------------------------------- 10.05s 2025-02-10 09:47:44.525990 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 9.94s 2025-02-10 09:47:44.526005 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 7.36s 2025-02-10 09:47:44.526082 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.19s 2025-02-10 09:47:44.526101 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.97s 2025-02-10 09:47:44.526115 | orchestrator | glance : Check glance containers ---------------------------------------- 4.71s 2025-02-10 09:47:44.526129 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.32s 2025-02-10 09:47:44.526160 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.19s 2025-02-10 09:47:44.526211 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.10s 2025-02-10 09:47:44.526237 | orchestrator | glance : Copy over multiple ceph configs for Glance --------------------- 4.08s 2025-02-10 09:47:44.526263 | orchestrator | 2025-02-10 09:47:41 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:44.526284 | orchestrator | 2025-02-10 09:47:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:44.526347 | orchestrator | 2025-02-10 09:47:44 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:47:47.568067 | orchestrator 
| 2025-02-10 09:47:44 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:47:47.568237 | orchestrator | 2025-02-10 09:47:44 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:47:47.568259 | orchestrator | 2025-02-10 09:47:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:06.829507 | orchestrator | 2025-02-10 
09:49:06 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:09.874100 | orchestrator | 2025-02-10 09:49:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:09.874299 | orchestrator | 2025-02-10 09:49:09 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:09.875151 | orchestrator | 2025-02-10 09:49:09 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:49:09.876968 | orchestrator | 2025-02-10 09:49:09 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:12.916713 | orchestrator | 2025-02-10 09:49:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:12.916880 | orchestrator | 2025-02-10 09:49:12 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:12.917415 | orchestrator | 2025-02-10 09:49:12 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:49:12.917452 | orchestrator | 2025-02-10 09:49:12 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:15.960482 | orchestrator | 2025-02-10 09:49:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:15.960621 | orchestrator | 2025-02-10 09:49:15 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:18.991280 | orchestrator | 2025-02-10 09:49:15 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:49:18.991521 | orchestrator | 2025-02-10 09:49:15 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:18.991549 | orchestrator | 2025-02-10 09:49:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:18.991621 | orchestrator | 2025-02-10 09:49:18 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:18.993657 | orchestrator | 2025-02-10 09:49:18 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:49:18.993692 | orchestrator | 2025-02-10 09:49:18 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:18.993714 | orchestrator | 2025-02-10 09:49:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:22.041513 | orchestrator | 2025-02-10 09:49:22 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:22.043587 | orchestrator | 2025-02-10 09:49:22 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state STARTED 2025-02-10 09:49:25.088077 | orchestrator | 2025-02-10 09:49:22 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:25.088264 | orchestrator | 2025-02-10 09:49:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:25.088305 | orchestrator | 2025-02-10 09:49:25 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:25.088985 | orchestrator | 2025-02-10 09:49:25 | INFO  | Task cd7a7c31-213e-4279-a14c-158f0a11d104 is in state SUCCESS 2025-02-10 09:49:25.089142 | orchestrator | 2025-02-10 09:49:25.090975 | orchestrator | 2025-02-10 09:49:25.091036 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:49:25.091062 | orchestrator | 2025-02-10 09:49:25.091079 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:49:25.091093 | orchestrator | Monday 10 February 2025 09:47:32 +0000 (0:00:00.408) 
0:00:00.408 ******* 2025-02-10 09:49:25.091107 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:49:25.091124 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:49:25.091138 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:49:25.091152 | orchestrator | 2025-02-10 09:49:25.091166 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:49:25.091180 | orchestrator | Monday 10 February 2025 09:47:32 +0000 (0:00:00.481) 0:00:00.890 ******* 2025-02-10 09:49:25.091257 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-02-10 09:49:25.091293 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-02-10 09:49:25.091308 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-02-10 09:49:25.091322 | orchestrator | 2025-02-10 09:49:25.091336 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-02-10 09:49:25.091350 | orchestrator | 2025-02-10 09:49:25.091486 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-02-10 09:49:25.091508 | orchestrator | Monday 10 February 2025 09:47:33 +0000 (0:00:00.451) 0:00:01.341 ******* 2025-02-10 09:49:25.091523 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:49:25.092024 | orchestrator | 2025-02-10 09:49:25.092047 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-02-10 09:49:25.092062 | orchestrator | Monday 10 February 2025 09:47:34 +0000 (0:00:00.763) 0:00:02.105 ******* 2025-02-10 09:49:25.092394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.092449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.092468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.092482 | orchestrator | 2025-02-10 09:49:25.092496 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-02-10 09:49:25.092510 | orchestrator | Monday 10 February 2025 09:47:35 +0000 (0:00:00.924) 0:00:03.030 ******* 2025-02-10 09:49:25.092525 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-02-10 09:49:25.092540 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-02-10 09:49:25.092554 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:49:25.092568 | orchestrator | 2025-02-10 09:49:25.092582 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-02-10 09:49:25.092596 | orchestrator | Monday 10 February 2025 09:47:35 +0000 (0:00:00.679) 0:00:03.709 ******* 2025-02-10 09:49:25.092610 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:49:25.092624 | orchestrator | 2025-02-10 09:49:25.092638 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-02-10 09:49:25.092692 | orchestrator | Monday 10 February 2025 09:47:36 +0000 (0:00:00.811) 0:00:04.521 ******* 2025-02-10 09:49:25.092711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.092727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.092752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.092767 | orchestrator | 2025-02-10 09:49:25.092781 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-02-10 09:49:25.092795 | orchestrator | Monday 10 February 2025 09:47:38 +0000 (0:00:01.960) 0:00:06.481 ******* 2025-02-10 09:49:25.092810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:49:25.092824 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:49:25.092839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:49:25.092853 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:49:25.092900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:49:25.092917 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:49:25.092931 | orchestrator | 2025-02-10 09:49:25.092945 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-02-10 09:49:25.092959 | orchestrator | Monday 10 February 2025 09:47:39 +0000 (0:00:00.996) 0:00:07.478 ******* 2025-02-10 09:49:25.092973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:49:25.092996 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:49:25.093012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:49:25.093029 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:49:25.093046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:49:25.093063 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:49:25.093079 | orchestrator | 2025-02-10 09:49:25.093097 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-02-10 09:49:25.093120 | orchestrator | Monday 10 February 2025 09:47:40 +0000 (0:00:01.112) 0:00:08.591 ******* 2025-02-10 09:49:25.093223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.093317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.093351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.093390 | orchestrator | 2025-02-10 09:49:25.093414 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-02-10 09:49:25.093431 | orchestrator | Monday 10 February 2025 09:47:41 +0000 (0:00:01.371) 0:00:09.962 ******* 2025-02-10 09:49:25.093445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.093460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.093475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.093489 | orchestrator | 2025-02-10 09:49:25.093504 | 
orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-02-10 09:49:25.093518 | orchestrator | Monday 10 February 2025 09:47:44 +0000 (0:00:02.197) 0:00:12.160 ******* 2025-02-10 09:49:25.093531 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:49:25.093546 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:49:25.093560 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:49:25.093573 | orchestrator | 2025-02-10 09:49:25.093587 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-02-10 09:49:25.093601 | orchestrator | Monday 10 February 2025 09:47:44 +0000 (0:00:00.309) 0:00:12.469 ******* 2025-02-10 09:49:25.093615 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-10 09:49:25.093630 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-10 09:49:25.093644 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-10 09:49:25.093657 | orchestrator | 2025-02-10 09:49:25.093679 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-02-10 09:49:25.093694 | orchestrator | Monday 10 February 2025 09:47:45 +0000 (0:00:01.444) 0:00:13.913 ******* 2025-02-10 09:49:25.093747 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-10 09:49:25.093772 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-10 09:49:25.093787 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-10 09:49:25.093801 | orchestrator | 2025-02-10 09:49:25.093815 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-02-10 09:49:25.093829 | orchestrator | Monday 10 February 2025 09:47:47 +0000 (0:00:01.630) 0:00:15.543 ******* 2025-02-10 09:49:25.093843 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:49:25.093857 | orchestrator | 2025-02-10 09:49:25.093870 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-02-10 09:49:25.093884 | orchestrator | Monday 10 February 2025 09:47:48 +0000 (0:00:01.336) 0:00:16.880 ******* 2025-02-10 09:49:25.093898 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-02-10 09:49:25.093912 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-02-10 09:49:25.093925 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:49:25.093940 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:49:25.093954 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:49:25.093968 | orchestrator | 2025-02-10 09:49:25.093981 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-02-10 09:49:25.093998 | orchestrator | Monday 10 February 2025 09:47:50 +0000 (0:00:01.355) 0:00:18.235 ******* 2025-02-10 09:49:25.094090 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:49:25.094108 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:49:25.094122 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:49:25.094136 | orchestrator | 2025-02-10 09:49:25.094150 | orchestrator | TASK [grafana : Copying over 
custom dashboards] ******************************** 2025-02-10 09:49:25.094165 | orchestrator | Monday 10 February 2025 09:47:50 +0000 (0:00:00.752) 0:00:18.988 ******* 2025-02-10 09:49:25.094180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071923, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3396413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071923, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3396413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071923, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3396413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1071899, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.325641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1071899, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.325641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-02-10 09:49:25.094364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1071899, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.325641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1071891, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.323641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1071891, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.323641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1071891, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.323641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1071903, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.327641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1071903, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.327641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1071903, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.327641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1071883, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3196409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1071883, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3196409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1071883, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3196409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 19609, 'inode': 1071893, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.323641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1071893, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.323641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1071893, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.323641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1071902, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.326641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1071902, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.326641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1071902, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.326641, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071881, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3176408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071881, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3176408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071881, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3176408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071873, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3066406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071873, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3066406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 
09:49:25.094839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071873, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3066406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1071884, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.320641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1071884, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.320641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1071884, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.320641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071876, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3096406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071876, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3096406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071876, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3096406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.094985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1071901, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.326641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1071901, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.326641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1071901, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.326641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 
1071886, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.320641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1071886, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.320641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1071886, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.320641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071907, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3326411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071907, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3326411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071907, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3326411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071878, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3136408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071878, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3136408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071878, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3136408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1071895, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.324641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1071895, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.324641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095399 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1071895, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.324641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071874, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3076406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071874, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3076406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071874, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3076406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071877, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3126407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071877, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3126407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071887, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.322641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071877, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3126407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071887, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.322641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072055, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4026427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071887, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.322641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072055, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4026427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072039, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3816423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072055, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4026427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072039, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3816423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072140, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.415643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072039, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3816423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072140, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.415643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1071936, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3426414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1071936, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3426414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072140, 'dev': 186, 'nlink': 
1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.415643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072144, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.417643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072144, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.417643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1071936, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3426414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072090, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4056427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072090, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1739177125.4056427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072144, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.417643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072100, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.406643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072100, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.406643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072090, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4056427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.095989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071939, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3436415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071939, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3436415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072100, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.406643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072054, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3816423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072054, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3816423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071939, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3436415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-02-10 09:49:25.096102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072145, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.418643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072145, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.418643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072054, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3816423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1072103, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.413643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1072103, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.413643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072145, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.418643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071952, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3476415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071952, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3476415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1072103, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1739177125.413643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1071947, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3456414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1071947, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3456414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071952, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3476415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071960, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3496416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071960, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3496416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1071947, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3456414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 410814, 'inode': 1071966, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3736422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1071966, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3736422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071960, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3496416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072147, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4196432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072147, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4196432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1071966, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.3736422, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072147, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1739177125.4196432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:49:25.096537 | orchestrator | 2025-02-10 09:49:25.096552 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-02-10 09:49:25.096571 | orchestrator | Monday 10 February 2025 09:48:28 +0000 (0:00:37.325) 0:00:56.314 ******* 2025-02-10 09:49:25.096586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.096608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.096623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:49:25.096637 | orchestrator | 2025-02-10 09:49:25.096651 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-02-10 09:49:25.096665 | orchestrator | Monday 10 
February 2025 09:48:29 +0000 (0:00:01.111) 0:00:57.426 ******* 2025-02-10 09:49:25.096679 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:49:25.096693 | orchestrator | 2025-02-10 09:49:25.096707 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-02-10 09:49:25.096721 | orchestrator | Monday 10 February 2025 09:48:31 +0000 (0:00:02.318) 0:00:59.745 ******* 2025-02-10 09:49:25.096735 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:49:25.096749 | orchestrator | 2025-02-10 09:49:25.096762 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-02-10 09:49:25.096777 | orchestrator | Monday 10 February 2025 09:48:33 +0000 (0:00:02.169) 0:01:01.914 ******* 2025-02-10 09:49:25.096790 | orchestrator | 2025-02-10 09:49:25.096805 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-02-10 09:49:25.096820 | orchestrator | Monday 10 February 2025 09:48:33 +0000 (0:00:00.063) 0:01:01.977 ******* 2025-02-10 09:49:25.096844 | orchestrator | 2025-02-10 09:49:25.096868 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-02-10 09:49:25.096892 | orchestrator | Monday 10 February 2025 09:48:33 +0000 (0:00:00.044) 0:01:02.022 ******* 2025-02-10 09:49:25.096907 | orchestrator | 2025-02-10 09:49:25.096921 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-02-10 09:49:25.096934 | orchestrator | Monday 10 February 2025 09:48:34 +0000 (0:00:00.140) 0:01:02.162 ******* 2025-02-10 09:49:25.096948 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:49:25.096970 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:49:25.096993 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:49:25.097017 | orchestrator | 2025-02-10 09:49:25.097040 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-02-10 09:49:25.097062 | orchestrator | Monday 10 February 2025 09:48:36 +0000 (0:00:01.891) 0:01:04.054 ******* 2025-02-10 09:49:25.097076 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:49:25.097089 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:49:25.097114 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-02-10 09:49:25.097128 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
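The two FAILED - RETRYING lines above come from the handler that polls Grafana on the first controller until it answers; the run below succeeds on the third attempt. As a rough illustration of that poll-until-ready pattern only, here is a minimal Python sketch. It is not the kolla-ansible handler itself, and the endpoint URL, retry count, delay and timeout are assumptions chosen for the example.

import time
import urllib.error
import urllib.request

def wait_until_ready(url, retries=12, delay=10.0, timeout=5.0):
    """Poll `url` until it answers with HTTP 200 or the retries are exhausted."""
    for attempt in range(retries, 0, -1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                if response.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not reachable yet, fall through and retry
        print(f"FAILED - RETRYING: waiting for service ({attempt - 1} retries left).")
        if attempt > 1:
            time.sleep(delay)
    return False

# Hypothetical target; the play above fronts Grafana on listen_port 3000.
wait_until_ready("https://api-int.testbed.osism.xyz:3000/login")

The bounded retry count mirrors the "12 retries left" seen above: the handler gives up after a fixed number of attempts rather than blocking the play indefinitely.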
2025-02-10 09:49:25.097142 | orchestrator | ok: [testbed-node-0]
2025-02-10 09:49:25.097156 | orchestrator |
2025-02-10 09:49:25.097170 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-02-10 09:49:25.097223 | orchestrator | Monday 10 February 2025 09:49:03 +0000 (0:00:27.912) 0:01:31.966 *******
2025-02-10 09:49:25.097240 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:49:25.097254 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:49:25.097269 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:49:25.097282 | orchestrator |
2025-02-10 09:49:25.097312 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-02-10 09:49:28.148646 | orchestrator | Monday 10 February 2025 09:49:16 +0000 (0:00:12.568) 0:01:44.535 *******
2025-02-10 09:49:28.148785 | orchestrator | ok: [testbed-node-0]
2025-02-10 09:49:28.148806 | orchestrator |
2025-02-10 09:49:28.148843 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-02-10 09:49:28.148857 | orchestrator | Monday 10 February 2025 09:49:18 +0000 (0:00:02.183) 0:01:46.718 *******
2025-02-10 09:49:28.148870 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:49:28.148884 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:49:28.148897 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:49:28.148910 | orchestrator |
2025-02-10 09:49:28.148922 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-02-10 09:49:28.148935 | orchestrator | Monday 10 February 2025 09:49:19 +0000 (0:00:00.691) 0:01:47.410 *******
2025-02-10 09:49:28.148949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-02-10 09:49:28.148967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-02-10 09:49:28.148984 | orchestrator |
2025-02-10 09:49:28.148996 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-02-10 09:49:28.149009 | orchestrator | Monday 10 February 2025 09:49:22 +0000 (0:00:02.863) 0:01:50.273 *******
2025-02-10 09:49:28.149021 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:49:28.149034 | orchestrator |
2025-02-10 09:49:28.149046 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:49:28.149059 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:49:28.149074 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:49:28.149087 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:49:28.149099 | orchestrator |
2025-02-10 09:49:28.149112 | orchestrator |
2025-02-10 09:49:28.149124 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:49:28.149137 | orchestrator | Monday 10 February 2025 09:49:22 +0000 (0:00:00.527) 0:01:50.801 *******
2025-02-10 09:49:28.149151 | orchestrator | ===============================================================================
2025-02-10 09:49:28.149164 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.33s
2025-02-10 09:49:28.149178 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.91s
2025-02-10 09:49:28.149245 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 12.57s
2025-02-10 09:49:28.149260 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.86s
2025-02-10 09:49:28.149273 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.32s
2025-02-10 09:49:28.149287 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 2.20s
2025-02-10 09:49:28.149300 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.18s
2025-02-10 09:49:28.149314 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.17s
2025-02-10 09:49:28.149327 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.96s
2025-02-10 09:49:28.149341 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.89s
2025-02-10 09:49:28.149355 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.63s
2025-02-10 09:49:28.149369 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.44s
2025-02-10 09:49:28.149382 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s
2025-02-10 09:49:28.149396 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.36s
2025-02-10 09:49:28.149410 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.34s
2025-02-10 09:49:28.149423 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.11s
2025-02-10 09:49:28.149436 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.11s
2025-02-10 09:49:28.149449 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 1.00s
2025-02-10 09:49:28.149464 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.92s
2025-02-10 09:49:28.149478 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.81s
2025-02-10 09:49:28.149492 | orchestrator | 2025-02-10 09:49:25 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED
2025-02-10 09:49:28.149506 | orchestrator | 2025-02-10 09:49:25 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:49:28.149537 | orchestrator | 2025-02-10 09:49:28 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED
2025-02-10 09:49:28.155718 | orchestrator | 2025-02-10 09:49:28 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED
2025-02-10 09:49:31.195087 | orchestrator | 2025-02-10 09:49:28 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:49:31.195293 | orchestrator | 2025-02-10 09:49:31 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in
state STARTED 2025-02-10 09:49:34.244472 | orchestrator | 2025-02-10 09:49:31 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:34.244752 | orchestrator | 2025-02-10 09:49:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:34.244828 | orchestrator | 2025-02-10 09:49:34 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:37.296250 | orchestrator | 2025-02-10 09:49:34 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:37.296391 | orchestrator | 2025-02-10 09:49:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:37.296428 | orchestrator | 2025-02-10 09:49:37 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:37.296604 | orchestrator | 2025-02-10 09:49:37 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:40.347549 | orchestrator | 2025-02-10 09:49:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:40.347715 | orchestrator | 2025-02-10 09:49:40 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:40.351290 | orchestrator | 2025-02-10 09:49:40 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:43.389635 | orchestrator | 2025-02-10 09:49:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:43.389939 | orchestrator | 2025-02-10 09:49:43 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:46.420438 | orchestrator | 2025-02-10 09:49:43 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:46.420555 | orchestrator | 2025-02-10 09:49:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:46.420584 | orchestrator | 2025-02-10 09:49:46 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:46.420969 | orchestrator | 2025-02-10 09:49:46 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:49.467458 | orchestrator | 2025-02-10 09:49:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:49.467874 | orchestrator | 2025-02-10 09:49:49 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:52.508994 | orchestrator | 2025-02-10 09:49:49 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:52.509334 | orchestrator | 2025-02-10 09:49:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:52.509397 | orchestrator | 2025-02-10 09:49:52 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:55.552022 | orchestrator | 2025-02-10 09:49:52 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:55.552205 | orchestrator | 2025-02-10 09:49:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:55.552248 | orchestrator | 2025-02-10 09:49:55 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:58.603040 | orchestrator | 2025-02-10 09:49:55 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:49:58.603251 | orchestrator | 2025-02-10 09:49:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:58.603293 | orchestrator | 2025-02-10 09:49:58 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:49:58.604556 | orchestrator | 2025-02-10 09:49:58 | INFO  | Task 
39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:01.650359 | orchestrator | 2025-02-10 09:49:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:01.650535 | orchestrator | 2025-02-10 09:50:01 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:01.651109 | orchestrator | 2025-02-10 09:50:01 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:04.700097 | orchestrator | 2025-02-10 09:50:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:04.700282 | orchestrator | 2025-02-10 09:50:04 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:04.700818 | orchestrator | 2025-02-10 09:50:04 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:07.730745 | orchestrator | 2025-02-10 09:50:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:07.730906 | orchestrator | 2025-02-10 09:50:07 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:10.766448 | orchestrator | 2025-02-10 09:50:07 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:10.766625 | orchestrator | 2025-02-10 09:50:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:10.766734 | orchestrator | 2025-02-10 09:50:10 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:13.815398 | orchestrator | 2025-02-10 09:50:10 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:13.815541 | orchestrator | 2025-02-10 09:50:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:13.815582 | orchestrator | 2025-02-10 09:50:13 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:13.817110 | orchestrator | 2025-02-10 09:50:13 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:16.869809 | orchestrator | 2025-02-10 09:50:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:16.870173 | orchestrator | 2025-02-10 09:50:16 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:16.870215 | orchestrator | 2025-02-10 09:50:16 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:19.912323 | orchestrator | 2025-02-10 09:50:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:19.912483 | orchestrator | 2025-02-10 09:50:19 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:19.912938 | orchestrator | 2025-02-10 09:50:19 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:19.913344 | orchestrator | 2025-02-10 09:50:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:22.953536 | orchestrator | 2025-02-10 09:50:22 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:22.953881 | orchestrator | 2025-02-10 09:50:22 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:22.954065 | orchestrator | 2025-02-10 09:50:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:26.024701 | orchestrator | 2025-02-10 09:50:26 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:29.076614 | orchestrator | 2025-02-10 09:50:26 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:29.076731 | orchestrator 
| 2025-02-10 09:50:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:29.076760 | orchestrator | 2025-02-10 09:50:29 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:32.120066 | orchestrator | 2025-02-10 09:50:29 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:32.120256 | orchestrator | 2025-02-10 09:50:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:32.120291 | orchestrator | 2025-02-10 09:50:32 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:35.164736 | orchestrator | 2025-02-10 09:50:32 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:35.164979 | orchestrator | 2025-02-10 09:50:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:35.165024 | orchestrator | 2025-02-10 09:50:35 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:38.209047 | orchestrator | 2025-02-10 09:50:35 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:38.209194 | orchestrator | 2025-02-10 09:50:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:38.209224 | orchestrator | 2025-02-10 09:50:38 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:38.214656 | orchestrator | 2025-02-10 09:50:38 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:41.265406 | orchestrator | 2025-02-10 09:50:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:41.265550 | orchestrator | 2025-02-10 09:50:41 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:41.272717 | orchestrator | 2025-02-10 09:50:41 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:44.344857 | orchestrator | 2025-02-10 09:50:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:44.345024 | orchestrator | 2025-02-10 09:50:44 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:44.347815 | orchestrator | 2025-02-10 09:50:44 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:44.347938 | orchestrator | 2025-02-10 09:50:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:47.403059 | orchestrator | 2025-02-10 09:50:47 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:47.403358 | orchestrator | 2025-02-10 09:50:47 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:47.403389 | orchestrator | 2025-02-10 09:50:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:50.458817 | orchestrator | 2025-02-10 09:50:50 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:50.459714 | orchestrator | 2025-02-10 09:50:50 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:53.499904 | orchestrator | 2025-02-10 09:50:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:53.500034 | orchestrator | 2025-02-10 09:50:53 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:50:56.529302 | orchestrator | 2025-02-10 09:50:53 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:50:56.529443 | orchestrator | 2025-02-10 09:50:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:56.529481 | 
orchestrator | 2025-02-10 09:50:56 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED
2025-02-10 09:50:56.529618 | orchestrator | 2025-02-10 09:50:56 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED
[... the same two tasks are re-checked roughly every three seconds from 09:50:56 through 09:52:18; every check reports both tasks still in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2025-02-10 09:52:21.869788 | orchestrator
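(Aside: the wait loop summarised above is the generic "poll an asynchronous task until it finishes" pattern. A minimal sketch of that pattern as an Ansible task follows; the helper command path and variable names are hypothetical placeholders, not taken from this log. The captured log resumes immediately below.)

- name: Wait for the asynchronous task to finish        # illustrative sketch only
  ansible.builtin.command: /usr/local/bin/check-task-state {{ task_id }}   # hypothetical helper script
  register: task_check
  changed_when: false
  retries: 60          # give up after ~3 minutes
  delay: 3             # matches the ~3 second interval seen in the log
  until: "'SUCCESS' in task_check.stdout"
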
| 2025-02-10 09:52:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:52:21.869829 | orchestrator | 2025-02-10 09:52:21 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:52:24.937223 | orchestrator | 2025-02-10 09:52:21 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:52:24.937383 | orchestrator | 2025-02-10 09:52:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:52:24.937421 | orchestrator | 2025-02-10 09:52:24 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:52:24.937566 | orchestrator | 2025-02-10 09:52:24 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:52:27.977616 | orchestrator | 2025-02-10 09:52:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:52:27.977756 | orchestrator | 2025-02-10 09:52:27 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:52:27.979242 | orchestrator | 2025-02-10 09:52:27 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:52:31.028222 | orchestrator | 2025-02-10 09:52:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:52:31.028400 | orchestrator | 2025-02-10 09:52:31 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:52:34.085862 | orchestrator | 2025-02-10 09:52:31 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state STARTED 2025-02-10 09:52:34.085927 | orchestrator | 2025-02-10 09:52:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:52:34.085952 | orchestrator | 2025-02-10 09:52:34.091267 | orchestrator | 2025-02-10 09:52:34 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:52:34.091317 | orchestrator | 2025-02-10 09:52:34 | INFO  | Task 39c9f668-65bd-4433-a1fc-4b6f0eed601b is in state SUCCESS 2025-02-10 09:52:34.091346 | orchestrator | 2025-02-10 09:52:34.091363 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:52:34.091378 | orchestrator | 2025-02-10 09:52:34.091392 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:52:34.091491 | orchestrator | Monday 10 February 2025 09:46:35 +0000 (0:00:00.456) 0:00:00.456 ******* 2025-02-10 09:52:34.091509 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.091555 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:52:34.091570 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:52:34.091584 | orchestrator | 2025-02-10 09:52:34.091601 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:52:34.091617 | orchestrator | Monday 10 February 2025 09:46:35 +0000 (0:00:00.492) 0:00:00.949 ******* 2025-02-10 09:52:34.091632 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-02-10 09:52:34.091648 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-02-10 09:52:34.091664 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-02-10 09:52:34.091680 | orchestrator | 2025-02-10 09:52:34.091696 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-02-10 09:52:34.091712 | orchestrator | 2025-02-10 09:52:34.091728 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:52:34.091743 | orchestrator | Monday 10 February 2025 
09:46:36 +0000 (0:00:00.359) 0:00:01.309 ******* 2025-02-10 09:52:34.091759 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:34.091777 | orchestrator | 2025-02-10 09:52:34.091793 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-02-10 09:52:34.091809 | orchestrator | Monday 10 February 2025 09:46:37 +0000 (0:00:00.953) 0:00:02.263 ******* 2025-02-10 09:52:34.093168 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-02-10 09:52:34.093205 | orchestrator | 2025-02-10 09:52:34.093220 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-02-10 09:52:34.093234 | orchestrator | Monday 10 February 2025 09:46:40 +0000 (0:00:03.317) 0:00:05.580 ******* 2025-02-10 09:52:34.093248 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-02-10 09:52:34.093263 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-02-10 09:52:34.093277 | orchestrator | 2025-02-10 09:52:34.093291 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-02-10 09:52:34.093305 | orchestrator | Monday 10 February 2025 09:46:47 +0000 (0:00:06.685) 0:00:12.266 ******* 2025-02-10 09:52:34.093319 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:52:34.093332 | orchestrator | 2025-02-10 09:52:34.093346 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-02-10 09:52:34.093360 | orchestrator | Monday 10 February 2025 09:46:51 +0000 (0:00:03.838) 0:00:16.104 ******* 2025-02-10 09:52:34.093373 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:52:34.093387 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-02-10 09:52:34.093401 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-02-10 09:52:34.093414 | orchestrator | 2025-02-10 09:52:34.093428 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-02-10 09:52:34.093441 | orchestrator | Monday 10 February 2025 09:46:59 +0000 (0:00:08.245) 0:00:24.350 ******* 2025-02-10 09:52:34.093455 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:52:34.093468 | orchestrator | 2025-02-10 09:52:34.093482 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-02-10 09:52:34.093496 | orchestrator | Monday 10 February 2025 09:47:02 +0000 (0:00:03.501) 0:00:27.852 ******* 2025-02-10 09:52:34.093509 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-02-10 09:52:34.093523 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-02-10 09:52:34.093536 | orchestrator | 2025-02-10 09:52:34.093550 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-02-10 09:52:34.093564 | orchestrator | Monday 10 February 2025 09:47:11 +0000 (0:00:08.643) 0:00:36.495 ******* 2025-02-10 09:52:34.093593 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-02-10 09:52:34.093607 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-02-10 09:52:34.093621 | orchestrator | changed: [testbed-node-0] => 
(item=load-balancer_member) 2025-02-10 09:52:34.093634 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-02-10 09:52:34.093648 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-02-10 09:52:34.093661 | orchestrator | 2025-02-10 09:52:34.093675 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:52:34.093689 | orchestrator | Monday 10 February 2025 09:47:27 +0000 (0:00:16.474) 0:00:52.970 ******* 2025-02-10 09:52:34.093703 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:34.093716 | orchestrator | 2025-02-10 09:52:34.093730 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-02-10 09:52:34.093743 | orchestrator | Monday 10 February 2025 09:47:29 +0000 (0:00:01.143) 0:00:54.113 ******* 2025-02-10 09:52:34.093757 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.093771 | orchestrator | 2025-02-10 09:52:34.093784 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-02-10 09:52:34.093798 | orchestrator | Monday 10 February 2025 09:48:03 +0000 (0:00:34.430) 0:01:28.544 ******* 2025-02-10 09:52:34.093811 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.093825 | orchestrator | 2025-02-10 09:52:34.093839 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-02-10 09:52:34.093877 | orchestrator | Monday 10 February 2025 09:48:08 +0000 (0:00:05.328) 0:01:33.872 ******* 2025-02-10 09:52:34.093893 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.093907 | orchestrator | 2025-02-10 09:52:34.093920 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-02-10 09:52:34.093934 | orchestrator | Monday 10 February 2025 09:48:12 +0000 (0:00:03.916) 0:01:37.789 ******* 2025-02-10 09:52:34.093948 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-02-10 09:52:34.093962 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-02-10 09:52:34.094094 | orchestrator | 2025-02-10 09:52:34.094116 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-02-10 09:52:34.094131 | orchestrator | Monday 10 February 2025 09:48:23 +0000 (0:00:11.202) 0:01:48.991 ******* 2025-02-10 09:52:34.094145 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-02-10 09:52:34.094159 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-02-10 09:52:34.094175 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-02-10 09:52:34.094190 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-02-10 09:52:34.094204 | orchestrator | 2025-02-10 09:52:34.094227 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-02-10 09:52:34.094241 | orchestrator | Monday 10 February 2025 09:48:39 +0000 (0:00:15.451) 0:02:04.443 ******* 2025-02-10 09:52:34.094255 | orchestrator | changed: 
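(Aside: the "Create security groups for octavia" and "Add rules for security groups" tasks above open ICMP, TCP/22, TCP/9443 and UDP/5555 on the two management security groups. A minimal sketch of equivalent rules with the openstack.cloud collection is shown here; it illustrates what the log reports and is not the role's actual task file. The captured log resumes immediately below.)

- name: Create the Octavia management security group    # mirrors lb-mgmt-sec-grp from the log
  openstack.cloud.security_group:
    name: lb-mgmt-sec-grp
    state: present

- name: Allow ICMP and amphora management traffic       # ICMP, SSH (22) and the amphora agent port (9443)
  openstack.cloud.security_group_rule:
    security_group: lb-mgmt-sec-grp
    protocol: "{{ item.protocol }}"
    port_range_min: "{{ item.port | default(omit) }}"
    port_range_max: "{{ item.port | default(omit) }}"
  loop:
    - { protocol: icmp }
    - { protocol: tcp, port: 22 }
    - { protocol: tcp, port: 9443 }

- name: Allow health-manager heartbeat traffic          # UDP/5555 on lb-health-mgr-sec-grp
  openstack.cloud.security_group_rule:
    security_group: lb-health-mgr-sec-grp
    protocol: udp
    port_range_min: 5555
    port_range_max: 5555
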
[testbed-node-0] 2025-02-10 09:52:34.094268 | orchestrator | 2025-02-10 09:52:34.094283 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-02-10 09:52:34.094296 | orchestrator | Monday 10 February 2025 09:48:44 +0000 (0:00:05.380) 0:02:09.824 ******* 2025-02-10 09:52:34.094310 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.094324 | orchestrator | 2025-02-10 09:52:34.094338 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-02-10 09:52:34.094352 | orchestrator | Monday 10 February 2025 09:48:52 +0000 (0:00:07.398) 0:02:17.222 ******* 2025-02-10 09:52:34.094377 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:34.094390 | orchestrator | 2025-02-10 09:52:34.094404 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-02-10 09:52:34.094418 | orchestrator | Monday 10 February 2025 09:48:52 +0000 (0:00:00.358) 0:02:17.580 ******* 2025-02-10 09:52:34.094432 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.094445 | orchestrator | 2025-02-10 09:52:34.094459 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:52:34.094472 | orchestrator | Monday 10 February 2025 09:48:57 +0000 (0:00:05.361) 0:02:22.942 ******* 2025-02-10 09:52:34.094486 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-02-10 09:52:34.094500 | orchestrator | 2025-02-10 09:52:34.094514 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-02-10 09:52:34.094527 | orchestrator | Monday 10 February 2025 09:49:00 +0000 (0:00:02.839) 0:02:25.781 ******* 2025-02-10 09:52:34.094541 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.094555 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.094569 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.094583 | orchestrator | 2025-02-10 09:52:34.094596 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-02-10 09:52:34.094610 | orchestrator | Monday 10 February 2025 09:49:07 +0000 (0:00:06.523) 0:02:32.305 ******* 2025-02-10 09:52:34.094624 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.094638 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.094660 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.094675 | orchestrator | 2025-02-10 09:52:34.094690 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-02-10 09:52:34.094704 | orchestrator | Monday 10 February 2025 09:49:12 +0000 (0:00:04.987) 0:02:37.292 ******* 2025-02-10 09:52:34.094718 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.094732 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.094746 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.094760 | orchestrator | 2025-02-10 09:52:34.094774 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-02-10 09:52:34.094788 | orchestrator | Monday 10 February 2025 09:49:13 +0000 (0:00:01.064) 0:02:38.357 ******* 2025-02-10 09:52:34.094801 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.094815 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:52:34.094830 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:52:34.094843 | orchestrator | 2025-02-10 
09:52:34.094857 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-02-10 09:52:34.094871 | orchestrator | Monday 10 February 2025 09:49:15 +0000 (0:00:02.514) 0:02:40.871 ******* 2025-02-10 09:52:34.094885 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.094899 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.094912 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.094926 | orchestrator | 2025-02-10 09:52:34.094940 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-02-10 09:52:34.094954 | orchestrator | Monday 10 February 2025 09:49:17 +0000 (0:00:01.250) 0:02:42.122 ******* 2025-02-10 09:52:34.094968 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.095002 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.095017 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.095031 | orchestrator | 2025-02-10 09:52:34.095045 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-02-10 09:52:34.095059 | orchestrator | Monday 10 February 2025 09:49:18 +0000 (0:00:01.389) 0:02:43.512 ******* 2025-02-10 09:52:34.095073 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.095087 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.095101 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.095114 | orchestrator | 2025-02-10 09:52:34.095140 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-02-10 09:52:34.095163 | orchestrator | Monday 10 February 2025 09:49:21 +0000 (0:00:03.236) 0:02:46.749 ******* 2025-02-10 09:52:34.095177 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.095191 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.095205 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.095219 | orchestrator | 2025-02-10 09:52:34.095233 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-02-10 09:52:34.095247 | orchestrator | Monday 10 February 2025 09:49:23 +0000 (0:00:02.040) 0:02:48.789 ******* 2025-02-10 09:52:34.095261 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.095275 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:52:34.095289 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:52:34.095302 | orchestrator | 2025-02-10 09:52:34.095317 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-02-10 09:52:34.095331 | orchestrator | Monday 10 February 2025 09:49:24 +0000 (0:00:00.930) 0:02:49.720 ******* 2025-02-10 09:52:34.095344 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:52:34.095358 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.095372 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:52:34.095385 | orchestrator | 2025-02-10 09:52:34.095400 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:52:34.095414 | orchestrator | Monday 10 February 2025 09:49:28 +0000 (0:00:03.553) 0:02:53.273 ******* 2025-02-10 09:52:34.095427 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:34.095442 | orchestrator | 2025-02-10 09:52:34.095461 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-02-10 09:52:34.095475 | orchestrator | Monday 
10 February 2025 09:49:29 +0000 (0:00:00.917) 0:02:54.191 ******* 2025-02-10 09:52:34.095489 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.095504 | orchestrator | 2025-02-10 09:52:34.095518 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-02-10 09:52:34.095531 | orchestrator | Monday 10 February 2025 09:49:33 +0000 (0:00:04.695) 0:02:58.886 ******* 2025-02-10 09:52:34.095545 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.095559 | orchestrator | 2025-02-10 09:52:34.095572 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-02-10 09:52:34.095586 | orchestrator | Monday 10 February 2025 09:49:37 +0000 (0:00:03.713) 0:03:02.600 ******* 2025-02-10 09:52:34.095600 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-02-10 09:52:34.095614 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-02-10 09:52:34.095627 | orchestrator | 2025-02-10 09:52:34.095641 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-02-10 09:52:34.095655 | orchestrator | Monday 10 February 2025 09:49:45 +0000 (0:00:07.487) 0:03:10.087 ******* 2025-02-10 09:52:34.095669 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.095683 | orchestrator | 2025-02-10 09:52:34.095697 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-02-10 09:52:34.095710 | orchestrator | Monday 10 February 2025 09:49:48 +0000 (0:00:03.670) 0:03:13.758 ******* 2025-02-10 09:52:34.095724 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:34.095758 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:52:34.095773 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:52:34.095787 | orchestrator | 2025-02-10 09:52:34.095800 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-02-10 09:52:34.095814 | orchestrator | Monday 10 February 2025 09:49:49 +0000 (0:00:00.534) 0:03:14.292 ******* 2025-02-10 09:52:34.095843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.095891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.095909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.095937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.096041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.096063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.096087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.096266 | orchestrator | 2025-02-10 09:52:34.096281 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-02-10 09:52:34.096295 | orchestrator | Monday 10 February 2025 09:49:52 +0000 (0:00:02.807) 0:03:17.099 ******* 2025-02-10 09:52:34.096309 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:34.096323 | orchestrator | 2025-02-10 09:52:34.096337 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-02-10 09:52:34.096351 | orchestrator | Monday 10 February 2025 09:49:52 +0000 (0:00:00.144) 0:03:17.244 ******* 2025-02-10 09:52:34.096364 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:34.096378 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:34.096392 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:34.096406 | orchestrator | 2025-02-10 09:52:34.096420 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-02-10 09:52:34.096434 | orchestrator | Monday 10 February 2025 09:49:52 +0000 (0:00:00.499) 0:03:17.743 ******* 2025-02-10 09:52:34.096448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.096465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.096496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.096510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.096530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.096543 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:34.096557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.097844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.097876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.097904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.097935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.097949 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:34.097990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.098006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.098063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.098124 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:34.098136 | orchestrator | 2025-02-10 09:52:34.098149 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:52:34.098162 | orchestrator | Monday 10 February 
2025 09:49:54 +0000 (0:00:01.945) 0:03:19.688 ******* 2025-02-10 09:52:34.098175 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:34.098187 | orchestrator | 2025-02-10 09:52:34.098200 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-02-10 09:52:34.098213 | orchestrator | Monday 10 February 2025 09:49:55 +0000 (0:00:01.088) 0:03:20.777 ******* 2025-02-10 09:52:34.098233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.098247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.098261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.098282 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.098304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.098318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.098348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.098499 | orchestrator | 2025-02-10 09:52:34.098513 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-02-10 09:52:34.098525 | orchestrator | Monday 10 February 2025 09:50:02 +0000 (0:00:06.634) 0:03:27.411 ******* 2025-02-10 09:52:34.098538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.098559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.098572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.098624 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:34.098637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.098669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.098683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.098721 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:34.098759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.098780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.098803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.098842 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:34.098855 | orchestrator | 2025-02-10 09:52:34.098868 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-02-10 09:52:34.098880 | orchestrator | Monday 10 February 2025 09:50:03 +0000 (0:00:01.080) 0:03:28.492 ******* 2025-02-10 09:52:34.098899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.098919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.098940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.098967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.099030 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:34.099045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.099065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.099095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.099110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.099123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.099135 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:34.099148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:52:34.099161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:52:34.099174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.099209 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:52:34.099223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:52:34.099236 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:34.099249 | orchestrator | 2025-02-10 09:52:34.099261 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-02-10 09:52:34.099274 | orchestrator | Monday 10 February 2025 09:50:05 +0000 (0:00:01.670) 0:03:30.162 ******* 2025-02-10 09:52:34.099286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.099299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2025-02-10 09:52:34.099318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.099345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.099359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.099372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.099385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.099398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.099411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.100670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.100698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.100709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.100720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.100731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.100741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.100761 | orchestrator | 2025-02-10 09:52:34.100771 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-02-10 09:52:34.100782 | orchestrator | Monday 10 February 2025 09:50:09 +0000 (0:00:04.445) 0:03:34.608 ******* 2025-02-10 09:52:34.100792 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-10 09:52:34.100803 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-10 09:52:34.108345 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-10 09:52:34.108405 | orchestrator | 2025-02-10 09:52:34.108415 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-02-10 09:52:34.108427 | orchestrator | Monday 10 February 2025 09:50:12 +0000 (0:00:02.710) 0:03:37.318 ******* 2025-02-10 09:52:34.108452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.108463 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.108472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.108481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.108505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.108522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
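[editor's note] The item dicts these octavia tasks loop over all share one shape: container_name, group, enabled, image, volumes, dimensions, plus optional healthcheck and haproxy blocks. As a reading aid only, below is a minimal Python sketch (not part of kolla-ansible or of this job) that summarizes two such definitions trimmed from the output above; the summarize helper and the hard-coded services mapping are hypothetical and exist purely to illustrate how the printed fields relate to each other.

```python
# Illustrative sketch, NOT from the job: condense kolla-style service
# definition dicts (as printed by the octavia tasks above) into one-line
# summaries. Data below is a trimmed, hypothetical subset of that output.
from typing import Any

services: dict[str, dict[str, Any]] = {
    "octavia-api": {
        "container_name": "octavia_api",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206",
        "volumes": ["/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro", ""],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"],
            "timeout": "30",
        },
    },
    "octavia-driver-agent": {
        "container_name": "octavia_driver_agent",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206",
        "volumes": ["/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro", ""],
        # no healthcheck key, matching what the log prints for this service
    },
}

def summarize(name: str, svc: dict[str, Any]) -> str:
    """Return one line with image tag, mounted volume count, and healthcheck test."""
    tag = svc["image"].rsplit(":", 1)[-1]
    # The volume lists in the log contain empty strings (conditional mounts
    # that rendered to ''); drop them before counting.
    volumes = [v for v in svc.get("volumes", []) if v]
    check = svc.get("healthcheck", {}).get("test", ["-", "none"])[-1]
    return f"{name}: tag={tag} volumes={len(volumes)} healthcheck={check!r}"

if __name__ == "__main__":
    for name, svc in services.items():
        if svc.get("enabled"):
            print(summarize(name, svc))
```

The log output continues below unchanged.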
2025-02-10 09:52:34.108531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.108616 | orchestrator | 2025-02-10 09:52:34.108625 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-02-10 09:52:34.108633 | orchestrator | Monday 10 February 2025 09:50:36 +0000 (0:00:24.307) 0:04:01.626 ******* 2025-02-10 09:52:34.108640 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.108649 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.108657 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.108664 | orchestrator | 2025-02-10 09:52:34.108672 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-02-10 09:52:34.108680 | orchestrator | Monday 10 February 2025 09:50:38 +0000 (0:00:02.382) 0:04:04.009 ******* 2025-02-10 09:52:34.108688 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108701 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108709 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108717 | orchestrator | changed: 
[testbed-node-0] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.108725 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.108733 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.108741 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.108752 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.108760 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.108768 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-10 09:52:34.108776 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-10 09:52:34.108784 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-10 09:52:34.108792 | orchestrator | 2025-02-10 09:52:34.108800 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-02-10 09:52:34.108807 | orchestrator | Monday 10 February 2025 09:50:49 +0000 (0:00:10.203) 0:04:14.213 ******* 2025-02-10 09:52:34.108815 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108823 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108831 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108838 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.108846 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.108854 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.108862 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.108869 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.108877 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.108885 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-10 09:52:34.108893 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-10 09:52:34.108900 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-10 09:52:34.108908 | orchestrator | 2025-02-10 09:52:34.108916 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-02-10 09:52:34.108924 | orchestrator | Monday 10 February 2025 09:50:56 +0000 (0:00:07.102) 0:04:21.315 ******* 2025-02-10 09:52:34.108936 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108944 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108952 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-10 09:52:34.108960 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.108967 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.109018 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-10 09:52:34.109027 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.109035 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.109043 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-10 09:52:34.109051 | orchestrator 
| changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-10 09:52:34.109058 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-10 09:52:34.109066 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-10 09:52:34.109074 | orchestrator | 2025-02-10 09:52:34.109082 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-02-10 09:52:34.109090 | orchestrator | Monday 10 February 2025 09:51:02 +0000 (0:00:06.656) 0:04:27.971 ******* 2025-02-10 09:52:34.109103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.109112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.109121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:34.109136 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.109145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.109158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:52:34.109166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:34.109266 | orchestrator | 2025-02-10 09:52:34.109274 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:52:34.109282 | orchestrator | Monday 10 February 2025 09:51:07 +0000 (0:00:04.679) 0:04:32.651 ******* 2025-02-10 09:52:34.109289 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:34.109296 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:34.109303 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:34.109314 | orchestrator | 2025-02-10 09:52:34.109321 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-02-10 09:52:34.109328 | orchestrator | Monday 10 February 2025 09:51:07 +0000 (0:00:00.344) 0:04:32.996 ******* 2025-02-10 09:52:34.109335 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109342 | orchestrator | 2025-02-10 09:52:34.109349 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-02-10 09:52:34.109356 | orchestrator | Monday 10 February 2025 09:51:10 +0000 (0:00:02.423) 0:04:35.420 ******* 2025-02-10 09:52:34.109363 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109370 | orchestrator | 2025-02-10 09:52:34.109376 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-02-10 09:52:34.109383 | orchestrator | Monday 10 February 2025 09:51:13 +0000 (0:00:02.906) 0:04:38.326 ******* 2025-02-10 09:52:34.109390 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109397 | orchestrator | 2025-02-10 09:52:34.109404 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-02-10 09:52:34.109412 | orchestrator | Monday 10 February 2025 09:51:16 +0000 (0:00:02.725) 0:04:41.051 ******* 2025-02-10 09:52:34.109419 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109425 | orchestrator | 2025-02-10 09:52:34.109436 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-02-10 09:52:34.109443 | orchestrator | Monday 10 February 2025 09:51:18 +0000 (0:00:02.606) 0:04:43.658 ******* 2025-02-10 09:52:34.109454 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109461 | orchestrator | 2025-02-10 09:52:34.109468 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-10 09:52:34.109475 | orchestrator | Monday 10 February 2025 09:51:37 +0000 (0:00:18.879) 0:05:02.538 ******* 2025-02-10 09:52:34.109482 | orchestrator | 2025-02-10 09:52:34.109489 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-10 09:52:34.109495 | orchestrator | Monday 10 February 2025 09:51:38 +0000 (0:00:00.514) 0:05:03.052 ******* 2025-02-10 09:52:34.109502 | orchestrator | 2025-02-10 09:52:34.109509 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-10 09:52:34.109516 | orchestrator | Monday 10 February 2025 09:51:38 +0000 
(0:00:00.098) 0:05:03.150 ******* 2025-02-10 09:52:34.109523 | orchestrator | 2025-02-10 09:52:34.109530 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-02-10 09:52:34.109537 | orchestrator | Monday 10 February 2025 09:51:38 +0000 (0:00:00.124) 0:05:03.275 ******* 2025-02-10 09:52:34.109544 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109551 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.109558 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.109564 | orchestrator | 2025-02-10 09:52:34.109571 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-02-10 09:52:34.109581 | orchestrator | Monday 10 February 2025 09:51:54 +0000 (0:00:16.259) 0:05:19.534 ******* 2025-02-10 09:52:34.109588 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.109595 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.109602 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109609 | orchestrator | 2025-02-10 09:52:34.109616 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-02-10 09:52:34.109623 | orchestrator | Monday 10 February 2025 09:52:05 +0000 (0:00:10.852) 0:05:30.387 ******* 2025-02-10 09:52:34.109630 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109636 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.109643 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.109650 | orchestrator | 2025-02-10 09:52:34.109657 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-02-10 09:52:34.109664 | orchestrator | Monday 10 February 2025 09:52:11 +0000 (0:00:06.298) 0:05:36.686 ******* 2025-02-10 09:52:34.109671 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109678 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.109684 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.109691 | orchestrator | 2025-02-10 09:52:34.109698 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-02-10 09:52:34.109705 | orchestrator | Monday 10 February 2025 09:52:21 +0000 (0:00:10.027) 0:05:46.714 ******* 2025-02-10 09:52:34.109712 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:34.109719 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:34.109726 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:34.109732 | orchestrator | 2025-02-10 09:52:34.109739 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:52:34.109746 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:52:34.109756 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:52:34.109763 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:52:34.109770 | orchestrator | 2025-02-10 09:52:34.109777 | orchestrator | 2025-02-10 09:52:34.109784 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:52:34.109799 | orchestrator | Monday 10 February 2025 09:52:32 +0000 (0:00:10.930) 0:05:57.645 ******* 2025-02-10 09:52:34.109805 | orchestrator | =============================================================================== 2025-02-10 
09:52:34.109812 | orchestrator | octavia : Create amphora flavor ---------------------------------------- 34.43s 2025-02-10 09:52:34.109819 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 24.31s 2025-02-10 09:52:34.109826 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 18.88s 2025-02-10 09:52:34.109833 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.47s 2025-02-10 09:52:34.109840 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.26s 2025-02-10 09:52:34.109847 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.45s 2025-02-10 09:52:34.109853 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.20s 2025-02-10 09:52:34.109860 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.93s 2025-02-10 09:52:34.109867 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.85s 2025-02-10 09:52:34.109874 | orchestrator | octavia : Copying certificate files for octavia-worker ----------------- 10.20s 2025-02-10 09:52:34.109881 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.03s 2025-02-10 09:52:34.109888 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.64s 2025-02-10 09:52:34.109894 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.25s 2025-02-10 09:52:34.109901 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.49s 2025-02-10 09:52:34.109908 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 7.40s 2025-02-10 09:52:34.109915 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 7.10s 2025-02-10 09:52:34.109922 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.69s 2025-02-10 09:52:34.109932 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.66s 2025-02-10 09:52:37.145966 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.63s 2025-02-10 09:52:37.146205 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.52s 2025-02-10 09:52:37.146226 | orchestrator | 2025-02-10 09:52:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:52:37.146262 | orchestrator | 2025-02-10 09:52:37 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state STARTED 2025-02-10 09:52:40.189943 | orchestrator | 2025-02-10 09:52:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:52:40.190196 | orchestrator | 2025-02-10 09:52:40 | INFO  | Task ea200019-1718-4db7-a175-86657831b4b9 is in state SUCCESS 2025-02-10 09:52:40.192270 | orchestrator | 2025-02-10 09:52:40.192350 | orchestrator | 2025-02-10 09:52:40.192358 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:52:40.192366 | orchestrator | 2025-02-10 09:52:40.192387 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-02-10 09:52:40.192396 | orchestrator | Monday 10 February 2025 09:42:59 +0000 (0:00:00.232) 0:00:00.233 ******* 2025-02-10 09:52:40.192499 | orchestrator | changed: [testbed-manager] 2025-02-10 
09:52:40.192512 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.192521 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.192526 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:40.192531 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.192537 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.192542 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.192548 | orchestrator | 2025-02-10 09:52:40.192552 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:52:40.192558 | orchestrator | Monday 10 February 2025 09:43:00 +0000 (0:00:00.827) 0:00:01.060 ******* 2025-02-10 09:52:40.192585 | orchestrator | changed: [testbed-manager] 2025-02-10 09:52:40.192591 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.192595 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.192600 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:40.192606 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.192611 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.192616 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.192621 | orchestrator | 2025-02-10 09:52:40.192627 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:52:40.192875 | orchestrator | Monday 10 February 2025 09:43:00 +0000 (0:00:00.929) 0:00:01.990 ******* 2025-02-10 09:52:40.192885 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-02-10 09:52:40.192891 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-02-10 09:52:40.192896 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-02-10 09:52:40.192903 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-02-10 09:52:40.192908 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-02-10 09:52:40.192941 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-02-10 09:52:40.192948 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-02-10 09:52:40.192953 | orchestrator | 2025-02-10 09:52:40.192958 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-02-10 09:52:40.192964 | orchestrator | 2025-02-10 09:52:40.192987 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-02-10 09:52:40.192993 | orchestrator | Monday 10 February 2025 09:43:02 +0000 (0:00:01.778) 0:00:03.768 ******* 2025-02-10 09:52:40.192999 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:40.193004 | orchestrator | 2025-02-10 09:52:40.193009 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-02-10 09:52:40.193015 | orchestrator | Monday 10 February 2025 09:43:03 +0000 (0:00:00.618) 0:00:04.387 ******* 2025-02-10 09:52:40.193020 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-02-10 09:52:40.193026 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-02-10 09:52:40.193031 | orchestrator | 2025-02-10 09:52:40.193037 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-02-10 09:52:40.193042 | orchestrator | Monday 10 February 2025 09:43:07 +0000 (0:00:04.031) 0:00:08.418 ******* 2025-02-10 09:52:40.193049 | orchestrator | changed: [testbed-node-0] => (item=None) 
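The two bootstrap tasks above ("Creating Nova databases" and "Creating Nova databases user and setting permissions", delegated to testbed-node-0) create the nova_api and nova_cell0 schemas and grant the service user access to them. The sketch below shows roughly equivalent SQL issued by hand; it is only an illustration under stated assumptions, not what kolla-ansible actually runs (the playbook uses its MySQL modules via kolla_toolbox), and the database host, root credentials, and the 'nova' user name here are placeholders.

# Illustrative only: approximates the effect of the two Nova database
# bootstrap tasks above. Host, credentials and the 'nova' user name are
# assumptions, not values taken from this deployment.
import pymysql

DATABASES = ["nova_api", "nova_cell0"]   # schema names seen in the task output
NOVA_USER = "nova"                       # assumed service user name
NOVA_PASSWORD = "CHANGE_ME"              # placeholder
DB_HOST = "mariadb.internal.example"     # placeholder for the database VIP

conn = pymysql.connect(host=DB_HOST, user="root", password="CHANGE_ME")
try:
    with conn.cursor() as cur:
        # Idempotent user creation, mirroring the task's changed/ok semantics.
        cur.execute(
            "CREATE USER IF NOT EXISTS %s@'%%' IDENTIFIED BY %s",
            (NOVA_USER, NOVA_PASSWORD),
        )
        for db in DATABASES:
            cur.execute(f"CREATE DATABASE IF NOT EXISTS `{db}`")
            cur.execute(
                f"GRANT ALL PRIVILEGES ON `{db}`.* TO %s@'%%'",
                (NOVA_USER,),
            )
        cur.execute("FLUSH PRIVILEGES")
    conn.commit()
finally:
    conn.close()

The remaining loop results of the permissions task continue below.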
2025-02-10 09:52:40.193055 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:52:40.193061 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.193066 | orchestrator | 2025-02-10 09:52:40.193071 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-02-10 09:52:40.193076 | orchestrator | Monday 10 February 2025 09:43:12 +0000 (0:00:04.636) 0:00:13.055 ******* 2025-02-10 09:52:40.193081 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.193086 | orchestrator | 2025-02-10 09:52:40.193091 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-02-10 09:52:40.193096 | orchestrator | Monday 10 February 2025 09:43:12 +0000 (0:00:00.805) 0:00:13.860 ******* 2025-02-10 09:52:40.193101 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.193106 | orchestrator | 2025-02-10 09:52:40.193111 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-02-10 09:52:40.193116 | orchestrator | Monday 10 February 2025 09:43:14 +0000 (0:00:01.510) 0:00:15.370 ******* 2025-02-10 09:52:40.193122 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.193128 | orchestrator | 2025-02-10 09:52:40.193133 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-10 09:52:40.193139 | orchestrator | Monday 10 February 2025 09:43:20 +0000 (0:00:06.442) 0:00:21.813 ******* 2025-02-10 09:52:40.193145 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.193150 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.193165 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.193170 | orchestrator | 2025-02-10 09:52:40.193175 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-02-10 09:52:40.193180 | orchestrator | Monday 10 February 2025 09:43:22 +0000 (0:00:01.243) 0:00:23.057 ******* 2025-02-10 09:52:40.193186 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:40.193192 | orchestrator | 2025-02-10 09:52:40.193197 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-02-10 09:52:40.193202 | orchestrator | Monday 10 February 2025 09:43:50 +0000 (0:00:28.603) 0:00:51.661 ******* 2025-02-10 09:52:40.193207 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.193213 | orchestrator | 2025-02-10 09:52:40.193217 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-02-10 09:52:40.193222 | orchestrator | Monday 10 February 2025 09:44:03 +0000 (0:00:12.911) 0:01:04.572 ******* 2025-02-10 09:52:40.193227 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:40.193233 | orchestrator | 2025-02-10 09:52:40.193237 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-02-10 09:52:40.193242 | orchestrator | Monday 10 February 2025 09:44:17 +0000 (0:00:13.661) 0:01:18.233 ******* 2025-02-10 09:52:40.193255 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:40.193261 | orchestrator | 2025-02-10 09:52:40.193266 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-02-10 09:52:40.193270 | orchestrator | Monday 10 February 2025 09:44:19 +0000 (0:00:01.936) 0:01:20.170 ******* 2025-02-10 09:52:40.193276 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.193281 | orchestrator | 2025-02-10 
09:52:40.193286 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-10 09:52:40.193297 | orchestrator | Monday 10 February 2025 09:44:20 +0000 (0:00:00.932) 0:01:21.103 ******* 2025-02-10 09:52:40.193304 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:40.193310 | orchestrator | 2025-02-10 09:52:40.193319 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-02-10 09:52:40.193458 | orchestrator | Monday 10 February 2025 09:44:22 +0000 (0:00:02.758) 0:01:23.862 ******* 2025-02-10 09:52:40.193470 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:40.193475 | orchestrator | 2025-02-10 09:52:40.193480 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-02-10 09:52:40.193485 | orchestrator | Monday 10 February 2025 09:44:39 +0000 (0:00:16.434) 0:01:40.296 ******* 2025-02-10 09:52:40.193491 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.193496 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.193501 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.193506 | orchestrator | 2025-02-10 09:52:40.193511 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-02-10 09:52:40.193517 | orchestrator | 2025-02-10 09:52:40.193526 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-02-10 09:52:40.193535 | orchestrator | Monday 10 February 2025 09:44:41 +0000 (0:00:01.903) 0:01:42.199 ******* 2025-02-10 09:52:40.193545 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:40.193555 | orchestrator | 2025-02-10 09:52:40.193565 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-02-10 09:52:40.193576 | orchestrator | Monday 10 February 2025 09:44:45 +0000 (0:00:04.709) 0:01:46.909 ******* 2025-02-10 09:52:40.193586 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.193601 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.193610 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.193616 | orchestrator | 2025-02-10 09:52:40.193621 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-02-10 09:52:40.193627 | orchestrator | Monday 10 February 2025 09:44:48 +0000 (0:00:02.898) 0:01:49.808 ******* 2025-02-10 09:52:40.193632 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.193645 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.193650 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.193655 | orchestrator | 2025-02-10 09:52:40.193661 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-02-10 09:52:40.193666 | orchestrator | Monday 10 February 2025 09:44:50 +0000 (0:00:01.901) 0:01:51.709 ******* 2025-02-10 09:52:40.193672 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.193677 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.193682 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.193687 | orchestrator | 2025-02-10 09:52:40.193697 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-02-10 09:52:40.193703 | orchestrator | Monday 10 February 2025 09:44:51 +0000 
(0:00:01.020) 0:01:52.730 ******* 2025-02-10 09:52:40.193708 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-10 09:52:40.193714 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.193720 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-10 09:52:40.193726 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.193732 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-02-10 09:52:40.193737 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-02-10 09:52:40.193743 | orchestrator | 2025-02-10 09:52:40.193748 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-02-10 09:52:40.193754 | orchestrator | Monday 10 February 2025 09:45:00 +0000 (0:00:08.524) 0:02:01.255 ******* 2025-02-10 09:52:40.193763 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.193772 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.193780 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.193788 | orchestrator | 2025-02-10 09:52:40.193795 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-02-10 09:52:40.193803 | orchestrator | Monday 10 February 2025 09:45:01 +0000 (0:00:01.029) 0:02:02.285 ******* 2025-02-10 09:52:40.193812 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-10 09:52:40.193821 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.193830 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-10 09:52:40.193838 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.193849 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-10 09:52:40.193858 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.193867 | orchestrator | 2025-02-10 09:52:40.194213 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-02-10 09:52:40.194242 | orchestrator | Monday 10 February 2025 09:45:02 +0000 (0:00:01.338) 0:02:03.623 ******* 2025-02-10 09:52:40.194248 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.194254 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.194260 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.194266 | orchestrator | 2025-02-10 09:52:40.194271 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-02-10 09:52:40.194277 | orchestrator | Monday 10 February 2025 09:45:03 +0000 (0:00:00.613) 0:02:04.236 ******* 2025-02-10 09:52:40.194283 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.194288 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.194293 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.194299 | orchestrator | 2025-02-10 09:52:40.194304 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-02-10 09:52:40.194309 | orchestrator | Monday 10 February 2025 09:45:04 +0000 (0:00:01.106) 0:02:05.342 ******* 2025-02-10 09:52:40.194314 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.194320 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.194379 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.194388 | orchestrator | 2025-02-10 09:52:40.194393 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-02-10 09:52:40.194399 | orchestrator | Monday 10 February 2025 09:45:07 +0000 (0:00:03.660) 
0:02:09.003 ******* 2025-02-10 09:52:40.194404 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.194421 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.194431 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:40.194440 | orchestrator | 2025-02-10 09:52:40.194449 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-02-10 09:52:40.194458 | orchestrator | Monday 10 February 2025 09:45:29 +0000 (0:00:22.021) 0:02:31.024 ******* 2025-02-10 09:52:40.194466 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.194476 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.194485 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:40.194495 | orchestrator | 2025-02-10 09:52:40.194510 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-02-10 09:52:40.194519 | orchestrator | Monday 10 February 2025 09:45:44 +0000 (0:00:14.260) 0:02:45.284 ******* 2025-02-10 09:52:40.194528 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:40.194536 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.194543 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.194549 | orchestrator | 2025-02-10 09:52:40.194554 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-02-10 09:52:40.194604 | orchestrator | Monday 10 February 2025 09:45:46 +0000 (0:00:02.286) 0:02:47.570 ******* 2025-02-10 09:52:40.194613 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.194658 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.194670 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.194679 | orchestrator | 2025-02-10 09:52:40.194688 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-02-10 09:52:40.194696 | orchestrator | Monday 10 February 2025 09:46:03 +0000 (0:00:17.081) 0:03:04.651 ******* 2025-02-10 09:52:40.194705 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.194713 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.194719 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.194724 | orchestrator | 2025-02-10 09:52:40.199089 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-02-10 09:52:40.199138 | orchestrator | Monday 10 February 2025 09:46:05 +0000 (0:00:01.872) 0:03:06.524 ******* 2025-02-10 09:52:40.199154 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.199171 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.199185 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.199200 | orchestrator | 2025-02-10 09:52:40.199215 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-02-10 09:52:40.199229 | orchestrator | 2025-02-10 09:52:40.199244 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-10 09:52:40.199258 | orchestrator | Monday 10 February 2025 09:46:06 +0000 (0:00:00.556) 0:03:07.081 ******* 2025-02-10 09:52:40.199272 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:40.199289 | orchestrator | 2025-02-10 09:52:40.199303 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-02-10 09:52:40.199318 | orchestrator | Monday 10 February 2025 09:46:06 
+0000 (0:00:00.559) 0:03:07.640 ******* 2025-02-10 09:52:40.199413 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-02-10 09:52:40.199431 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-02-10 09:52:40.199445 | orchestrator | 2025-02-10 09:52:40.199460 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-02-10 09:52:40.199474 | orchestrator | Monday 10 February 2025 09:46:10 +0000 (0:00:03.486) 0:03:11.126 ******* 2025-02-10 09:52:40.199488 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-02-10 09:52:40.199504 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-02-10 09:52:40.199518 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-02-10 09:52:40.199568 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-02-10 09:52:40.199583 | orchestrator | 2025-02-10 09:52:40.199598 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-02-10 09:52:40.199613 | orchestrator | Monday 10 February 2025 09:46:16 +0000 (0:00:06.510) 0:03:17.637 ******* 2025-02-10 09:52:40.199642 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:52:40.199658 | orchestrator | 2025-02-10 09:52:40.199672 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-02-10 09:52:40.199687 | orchestrator | Monday 10 February 2025 09:46:20 +0000 (0:00:03.774) 0:03:21.411 ******* 2025-02-10 09:52:40.199701 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:52:40.199715 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-02-10 09:52:40.199730 | orchestrator | 2025-02-10 09:52:40.199744 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-02-10 09:52:40.199759 | orchestrator | Monday 10 February 2025 09:46:24 +0000 (0:00:04.407) 0:03:25.819 ******* 2025-02-10 09:52:40.199773 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:52:40.199787 | orchestrator | 2025-02-10 09:52:40.199802 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-02-10 09:52:40.199817 | orchestrator | Monday 10 February 2025 09:46:28 +0000 (0:00:04.051) 0:03:29.871 ******* 2025-02-10 09:52:40.199831 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-02-10 09:52:40.199845 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-02-10 09:52:40.199860 | orchestrator | 2025-02-10 09:52:40.199874 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-02-10 09:52:40.199906 | orchestrator | Monday 10 February 2025 09:46:38 +0000 (0:00:09.322) 0:03:39.193 ******* 2025-02-10 09:52:40.199927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.199949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.199997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.200028 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.200047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.200063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.200078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.200100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.200115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.200129 | orchestrator | 2025-02-10 09:52:40.200144 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-02-10 09:52:40.200158 | orchestrator | Monday 10 February 2025 09:46:39 +0000 (0:00:01.400) 0:03:40.594 ******* 2025-02-10 09:52:40.200172 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.200186 | orchestrator | 2025-02-10 09:52:40.200201 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-02-10 09:52:40.200215 | orchestrator | Monday 10 February 2025 09:46:39 +0000 (0:00:00.230) 0:03:40.824 ******* 2025-02-10 09:52:40.200228 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.200243 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.200256 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.200270 | orchestrator | 2025-02-10 09:52:40.200285 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-02-10 09:52:40.200299 | orchestrator | Monday 10 February 2025 09:46:40 +0000 (0:00:00.281) 0:03:41.106 ******* 2025-02-10 09:52:40.200319 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:52:40.200335 | orchestrator | 2025-02-10 09:52:40.200349 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-02-10 09:52:40.200373 | orchestrator | Monday 10 February 2025 09:46:40 +0000 (0:00:00.487) 0:03:41.594 ******* 2025-02-10 09:52:40.200388 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.200403 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.200423 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.200438 | orchestrator | 2025-02-10 09:52:40.200452 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-10 09:52:40.200467 | orchestrator | Monday 10 February 2025 09:46:40 +0000 (0:00:00.265) 0:03:41.859 ******* 2025-02-10 09:52:40.200481 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:40.200496 | orchestrator | 2025-02-10 09:52:40.200510 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-02-10 09:52:40.200524 | orchestrator | Monday 10 February 2025 09:46:41 +0000 (0:00:00.750) 0:03:42.610 ******* 2025-02-10 09:52:40.200539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.200561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.200587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.200604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.200619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.200640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.200654 | orchestrator | 2025-02-10 09:52:40.200669 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-02-10 09:52:40.200683 | orchestrator | Monday 10 February 2025 09:46:44 +0000 (0:00:02.681) 0:03:45.291 ******* 2025-02-10 09:52:40.200697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.200722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.200737 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.200751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.200774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.200788 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.200803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.200818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.200833 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.200847 | orchestrator | 2025-02-10 09:52:40.200861 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-02-10 09:52:40.200882 | orchestrator | Monday 10 February 2025 09:46:44 +0000 (0:00:00.670) 0:03:45.962 ******* 2025-02-10 09:52:40.200897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.200920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.200935 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.200950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.200966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201006 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.201031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.201055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201069 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.201083 | orchestrator | 2025-02-10 09:52:40.201098 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-02-10 09:52:40.201112 | orchestrator | Monday 10 February 2025 09:46:46 +0000 (0:00:01.299) 0:03:47.261 
******* 2025-02-10 09:52:40.201127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.201152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.201175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.201191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.201206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.201236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
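Side note on the loop items echoed above: each one is the per-node service definition the nova role works from while copying config.json and nova.conf, and the 'healthcheck' block inside it (healthcheck_curl against the node's API bind address, healthcheck_port for the RPC-only services) is what ends up as the container's health check. A minimal Python sketch of that mapping, reusing the nova-scheduler entry from the log; the function name and the nanosecond conversion are illustrative assumptions, not kolla-ansible code:

NS = 1_000_000_000  # Docker's HealthConfig expresses durations in nanoseconds

def healthcheck_from_definition(definition):
    """Map a kolla-style service definition (as echoed in the log above)
    onto Docker HealthConfig-like fields. Illustrative sketch only."""
    hc = definition.get("healthcheck")
    if not hc:
        return None
    return {
        "test": hc["test"],                       # e.g. ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672']
        "interval": int(hc["interval"]) * NS,
        "timeout": int(hc["timeout"]) * NS,
        "start_period": int(hc["start_period"]) * NS,
        "retries": int(hc["retries"]),
    }

nova_scheduler = {
    "container_name": "nova_scheduler",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port nova-scheduler 5672"],
        "timeout": "30",
    },
}

print(healthcheck_from_definition(nova_scheduler))

Running it prints the dictionary with the durations scaled to nanoseconds, which is the shape a container engine's health-check configuration broadly expects.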
2025-02-10 09:52:40.201296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201312 | orchestrator | 2025-02-10 09:52:40.201327 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-02-10 09:52:40.201341 | orchestrator | Monday 10 February 2025 09:46:49 +0000 (0:00:03.112) 0:03:50.374 ******* 2025-02-10 09:52:40.201357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.201372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.201400 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.201424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.201439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.201468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.201523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201540 | orchestrator | 2025-02-10 09:52:40.201555 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-02-10 09:52:40.201571 | orchestrator | Monday 10 February 2025 09:46:56 +0000 (0:00:07.514) 0:03:57.889 ******* 2025-02-10 09:52:40.201586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.201601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201631 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.201710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.201735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:52:40.201751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201802 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.201817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.201838 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.201853 | orchestrator | 2025-02-10 09:52:40.201867 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-02-10 09:52:40.201881 | orchestrator | Monday 10 February 2025 09:46:57 +0000 (0:00:00.955) 0:03:58.844 ******* 2025-02-10 09:52:40.201896 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.201910 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.201924 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:40.201938 | orchestrator | 2025-02-10 09:52:40.201951 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-02-10 09:52:40.201966 | orchestrator | 
Monday 10 February 2025 09:46:59 +0000 (0:00:01.997) 0:04:00.841 ******* 2025-02-10 09:52:40.202063 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.202079 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.202094 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.202107 | orchestrator | 2025-02-10 09:52:40.202121 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-02-10 09:52:40.202136 | orchestrator | Monday 10 February 2025 09:47:00 +0000 (0:00:00.411) 0:04:01.253 ******* 2025-02-10 09:52:40.202150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.202166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.202203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:52:40.202219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.202234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.202249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.202264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.202285 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.202300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.202315 | orchestrator | 2025-02-10 09:52:40.202335 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-10 09:52:40.202350 | orchestrator | Monday 10 February 2025 09:47:03 +0000 (0:00:03.010) 0:04:04.264 ******* 2025-02-10 09:52:40.202365 | orchestrator | 2025-02-10 09:52:40.202379 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-10 09:52:40.202393 | orchestrator | Monday 10 February 2025 09:47:03 +0000 (0:00:00.360) 0:04:04.624 ******* 2025-02-10 09:52:40.202407 | orchestrator | 2025-02-10 09:52:40.202421 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-10 09:52:40.202435 | orchestrator | Monday 10 February 2025 09:47:03 +0000 (0:00:00.129) 0:04:04.754 ******* 2025-02-10 09:52:40.202449 | orchestrator | 2025-02-10 09:52:40.202463 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-02-10 09:52:40.202477 | orchestrator | Monday 10 February 2025 09:47:04 +0000 (0:00:00.339) 0:04:05.094 ******* 2025-02-10 09:52:40.202491 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.202506 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.202520 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:40.202534 | orchestrator | 2025-02-10 09:52:40.202548 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-02-10 09:52:40.202562 | orchestrator | Monday 10 February 2025 09:47:18 +0000 (0:00:14.168) 0:04:19.262 ******* 2025-02-10 09:52:40.202576 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.202589 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:40.202603 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.202629 | orchestrator | 2025-02-10 09:52:40.202644 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-02-10 09:52:40.202658 | orchestrator | 2025-02-10 09:52:40.202682 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:52:40.202697 | orchestrator | Monday 10 February 2025 09:47:29 +0000 (0:00:11.274) 0:04:30.537 ******* 
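Side note on the two handlers above: they recreate nova_scheduler and nova_api on each controller, and because every service definition in this play carries a healthcheck, the restart can be spot-checked afterwards by reading the health status the container engine reports. A hypothetical manual check (not part of this job), assuming Docker as the engine and the container names shown in the log:

import subprocess

# Hypothetical spot-check, run by hand on a controller node; not executed by this job.
for name in ("nova_api", "nova_scheduler"):
    result = subprocess.run(
        ["docker", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=False,
    )
    status = result.stdout.strip() or result.stderr.strip()
    print(f"{name}: {status}")

A healthy deployment should report "healthy" for both containers once the start_period from their healthcheck definitions has elapsed.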
2025-02-10 09:52:40.202711 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:40.202725 | orchestrator | 2025-02-10 09:52:40.202739 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:52:40.202753 | orchestrator | Monday 10 February 2025 09:47:31 +0000 (0:00:01.707) 0:04:32.244 ******* 2025-02-10 09:52:40.202766 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.202780 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.202803 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.202817 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.202831 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.202845 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.202860 | orchestrator | 2025-02-10 09:52:40.202874 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-02-10 09:52:40.202888 | orchestrator | Monday 10 February 2025 09:47:32 +0000 (0:00:00.915) 0:04:33.160 ******* 2025-02-10 09:52:40.202902 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.202917 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.202932 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.202945 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:52:40.202960 | orchestrator | 2025-02-10 09:52:40.202993 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-10 09:52:40.203008 | orchestrator | Monday 10 February 2025 09:47:33 +0000 (0:00:01.476) 0:04:34.637 ******* 2025-02-10 09:52:40.203022 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-02-10 09:52:40.203036 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-02-10 09:52:40.203051 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-02-10 09:52:40.203065 | orchestrator | 2025-02-10 09:52:40.203080 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-10 09:52:40.203094 | orchestrator | Monday 10 February 2025 09:47:34 +0000 (0:00:00.744) 0:04:35.382 ******* 2025-02-10 09:52:40.203108 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-02-10 09:52:40.203121 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-02-10 09:52:40.203136 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-02-10 09:52:40.203149 | orchestrator | 2025-02-10 09:52:40.203164 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-10 09:52:40.203178 | orchestrator | Monday 10 February 2025 09:47:35 +0000 (0:00:01.493) 0:04:36.875 ******* 2025-02-10 09:52:40.203192 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-02-10 09:52:40.203207 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.203221 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-02-10 09:52:40.203235 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.203249 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-02-10 09:52:40.203263 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.203277 | orchestrator | 2025-02-10 09:52:40.203291 | orchestrator | TASK [nova-cell : Enable bridge-nf-call 
sysctl variables] ********************** 2025-02-10 09:52:40.203306 | orchestrator | Monday 10 February 2025 09:47:36 +0000 (0:00:00.809) 0:04:37.685 ******* 2025-02-10 09:52:40.203319 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:52:40.203334 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:52:40.203348 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.203362 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:52:40.203376 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-10 09:52:40.203390 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:52:40.203404 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-10 09:52:40.203417 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.203441 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:52:40.203455 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-10 09:52:40.203469 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:52:40.203483 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.203497 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-10 09:52:40.203521 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-10 09:52:40.203535 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-10 09:52:40.203549 | orchestrator | 2025-02-10 09:52:40.203563 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-02-10 09:52:40.203577 | orchestrator | Monday 10 February 2025 09:47:38 +0000 (0:00:01.758) 0:04:39.443 ******* 2025-02-10 09:52:40.203591 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.203605 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.203619 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.203632 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.203647 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.203661 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.203675 | orchestrator | 2025-02-10 09:52:40.203688 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-02-10 09:52:40.203702 | orchestrator | Monday 10 February 2025 09:47:39 +0000 (0:00:01.444) 0:04:40.887 ******* 2025-02-10 09:52:40.203716 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.203730 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.203744 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.203758 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.203771 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.203785 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.203799 | orchestrator | 2025-02-10 09:52:40.203813 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-02-10 09:52:40.203871 | orchestrator | Monday 10 February 2025 09:47:41 +0000 (0:00:01.934) 0:04:42.822 ******* 2025-02-10 09:52:40.203890 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.203906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.203921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.203953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.203984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.204009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.204025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.204040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.204104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.204140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.204157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.204172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.204188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.204204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.204219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.204240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.204263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.204953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.205068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.205078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.205185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.205201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.205246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.205255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.205287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.205298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.205306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.205315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205365 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205425 | orchestrator | 2025-02-10 09:52:40.205431 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:52:40.205437 | orchestrator | Monday 10 February 2025 09:47:45 +0000 (0:00:03.304) 0:04:46.126 ******* 2025-02-10 09:52:40.205443 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:52:40.205452 | orchestrator | 2025-02-10 09:52:40.205460 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-02-10 09:52:40.205469 | orchestrator | Monday 10 February 2025 09:47:46 +0000 (0:00:01.778) 0:04:47.905 ******* 2025-02-10 09:52:40.205477 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.205664 | orchestrator | 2025-02-10 09:52:40.205670 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-02-10 09:52:40.205675 | orchestrator | Monday 10 February 2025 09:47:52 +0000 (0:00:05.708) 0:04:53.614 ******* 2025-02-10 09:52:40.205681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.205690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.205696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205701 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.205723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.205730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.205735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205745 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.205752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.205758 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205769 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.205782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.205788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.205797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205803 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.205810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.205816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205832 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.205854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.205864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2025-02-10 09:52:40.205876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205884 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.205892 | orchestrator | 2025-02-10 09:52:40.205900 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-02-10 09:52:40.205908 | orchestrator | Monday 10 February 2025 09:47:54 +0000 (0:00:01.832) 0:04:55.447 ******* 2025-02-10 09:52:40.205916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.205925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.205936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205941 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.205946 | 
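
The mixed changed/skipping results in these certificate-copy loops come from how the role iterates the cell's service definitions: every host walks the full dict of services, but only acts on entries that are enabled and whose group the host belongs to, so the proxies disabled in this deployment (nova-spicehtml5proxy, nova-serialproxy) skip everywhere, compute-only entries skip on the controller nodes, and controller-only entries skip on the compute nodes. A minimal Python sketch of that gating, with made-up group memberships that mirror what the log shows (not kolla-ansible source):

```python
# Illustrative sketch: why the same loop item prints "changed" on one host and
# "skipping" on another. Each item is a service definition like the dicts in
# the log; a host only acts on services that are enabled and whose group it
# belongs to. Group memberships below are assumptions matching this testbed.
nova_cell_services = {
    "nova-libvirt": {"group": "compute", "enabled": True},
    "nova-novncproxy": {"group": "nova-novncproxy", "enabled": True},
    "nova-spicehtml5proxy": {"group": "nova-spicehtml5proxy", "enabled": False},
}

host_groups = {
    "testbed-node-0": {"nova-conductor", "nova-novncproxy", "nova-compute-ironic"},
    "testbed-node-3": {"compute"},
}

def host_handles(host: str, service: dict) -> bool:
    """True if this host should template config for this service."""
    return service["enabled"] and service["group"] in host_groups.get(host, set())

for host in sorted(host_groups):
    for name, svc in nova_cell_services.items():
        verdict = "changed" if host_handles(host, svc) else "skipping"
        print(f"{verdict}: [{host}] => (item={name})")
```
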
orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.205955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.205960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.205965 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.206009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.206050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.206062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.206074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.206079 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.206084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.206090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.206094 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.206100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.206105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.206115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.206125 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.206131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.206136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.206141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.206145 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.206150 | orchestrator | 2025-02-10 09:52:40.206155 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:52:40.206160 | orchestrator | Monday 10 February 2025 09:47:57 +0000 (0:00:02.697) 0:04:58.144 ******* 2025-02-10 09:52:40.206165 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.206170 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.206175 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.206180 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:52:40.206185 | orchestrator | 2025-02-10 09:52:40.206191 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-02-10 09:52:40.206196 | orchestrator | Monday 10 February 2025 09:47:58 +0000 (0:00:01.082) 0:04:59.227 ******* 2025-02-10 09:52:40.206201 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:52:40.206207 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:52:40.206212 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:52:40.206217 | orchestrator | 2025-02-10 09:52:40.206223 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-02-10 09:52:40.206228 | orchestrator | Monday 10 February 2025 09:47:59 +0000 (0:00:01.064) 0:05:00.292 ******* 2025-02-10 09:52:40.206232 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:52:40.206237 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:52:40.206242 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:52:40.206247 | orchestrator | 2025-02-10 09:52:40.206251 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-02-10 09:52:40.206257 | orchestrator | Monday 10 February 2025 09:48:00 +0000 (0:00:01.281) 0:05:01.573 ******* 2025-02-10 09:52:40.206266 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:52:40.206272 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:52:40.206277 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:52:40.206282 | orchestrator | 2025-02-10 09:52:40.206287 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-02-10 09:52:40.206292 | orchestrator | Monday 10 February 2025 09:48:01 +0000 (0:00:00.796) 0:05:02.370 ******* 2025-02-10 09:52:40.206297 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:52:40.206302 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:52:40.206307 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:52:40.206312 | orchestrator | 2025-02-10 09:52:40.206317 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-02-10 09:52:40.206322 | orchestrator | Monday 10 February 2025 09:48:01 +0000 (0:00:00.616) 0:05:02.986 ******* 2025-02-10 09:52:40.206327 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-02-10 09:52:40.206341 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-02-10 
09:52:40.206347 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-02-10 09:52:40.206357 | orchestrator | 2025-02-10 09:52:40.206363 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-02-10 09:52:40.206368 | orchestrator | Monday 10 February 2025 09:48:03 +0000 (0:00:01.510) 0:05:04.497 ******* 2025-02-10 09:52:40.206372 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-02-10 09:52:40.206378 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-02-10 09:52:40.206383 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-02-10 09:52:40.206387 | orchestrator | 2025-02-10 09:52:40.206392 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-02-10 09:52:40.206397 | orchestrator | Monday 10 February 2025 09:48:04 +0000 (0:00:01.473) 0:05:05.970 ******* 2025-02-10 09:52:40.206402 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-02-10 09:52:40.206407 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-02-10 09:52:40.206412 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-02-10 09:52:40.206416 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-02-10 09:52:40.206421 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-02-10 09:52:40.206426 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-02-10 09:52:40.206431 | orchestrator | 2025-02-10 09:52:40.206435 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-02-10 09:52:40.206440 | orchestrator | Monday 10 February 2025 09:48:12 +0000 (0:00:07.084) 0:05:13.054 ******* 2025-02-10 09:52:40.206445 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.206450 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.206455 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.206460 | orchestrator | 2025-02-10 09:52:40.206465 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-02-10 09:52:40.206470 | orchestrator | Monday 10 February 2025 09:48:12 +0000 (0:00:00.520) 0:05:13.575 ******* 2025-02-10 09:52:40.206474 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.206479 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.206484 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.206489 | orchestrator | 2025-02-10 09:52:40.206494 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-02-10 09:52:40.206499 | orchestrator | Monday 10 February 2025 09:48:13 +0000 (0:00:00.609) 0:05:14.184 ******* 2025-02-10 09:52:40.206504 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.206509 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.206513 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.206518 | orchestrator | 2025-02-10 09:52:40.206525 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-02-10 09:52:40.206530 | orchestrator | Monday 10 February 2025 09:48:14 +0000 (0:00:01.698) 0:05:15.883 ******* 2025-02-10 09:52:40.206536 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-02-10 09:52:40.206553 | orchestrator | changed: [testbed-node-4] => (item={'uuid': 
'5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-02-10 09:52:40.206559 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-02-10 09:52:40.206564 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-02-10 09:52:40.206569 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-02-10 09:52:40.206574 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-02-10 09:52:40.206579 | orchestrator | 2025-02-10 09:52:40.206585 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-02-10 09:52:40.206589 | orchestrator | Monday 10 February 2025 09:48:19 +0000 (0:00:04.277) 0:05:20.161 ******* 2025-02-10 09:52:40.206597 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:52:40.206602 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:52:40.206607 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:52:40.206612 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:52:40.206620 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.206628 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:52:40.206636 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.206643 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:52:40.206651 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.206659 | orchestrator | 2025-02-10 09:52:40.206667 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-02-10 09:52:40.206674 | orchestrator | Monday 10 February 2025 09:48:22 +0000 (0:00:03.881) 0:05:24.042 ******* 2025-02-10 09:52:40.206682 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.206690 | orchestrator | 2025-02-10 09:52:40.206698 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-02-10 09:52:40.206706 | orchestrator | Monday 10 February 2025 09:48:23 +0000 (0:00:00.202) 0:05:24.245 ******* 2025-02-10 09:52:40.206713 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.206720 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.206729 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.206738 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.206746 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.206754 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.206762 | orchestrator | 2025-02-10 09:52:40.206771 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-02-10 09:52:40.206786 | orchestrator | Monday 10 February 2025 09:48:24 +0000 (0:00:01.177) 0:05:25.423 ******* 2025-02-10 09:52:40.206791 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:52:40.206797 | orchestrator | 2025-02-10 09:52:40.206802 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-02-10 09:52:40.206807 | orchestrator | Monday 10 February 2025 09:48:24 +0000 (0:00:00.476) 0:05:25.900 ******* 2025-02-10 09:52:40.206812 | 
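The "Pushing nova secret xml for libvirt" and "Pushing secrets key for libvirt" tasks above stage Ceph client secrets for the nova_libvirt containers on testbed-node-3/4/5, using the UUIDs shown in the log (client.nova: 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd, client.cinder: 63dd366f-e403-41f2-beff-dad9980a1637). A rough manual equivalent with virsh, shown only as a sketch and not the exact steps the nova-cell role performs (the file name and the key variable below are placeholders, not values from this job):

    # secret-nova.xml -- libvirt Ceph secret descriptor, UUID taken from the log above:
    #   <secret ephemeral='no' private='no'>
    #     <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>
    #     <usage type='ceph'><name>client.nova secret</name></usage>
    #   </secret>
    virsh secret-define --file secret-nova.xml
    # Attach the key that the "Extract nova key from file" task read out of the nova keyring;
    # NOVA_CEPH_KEY is a placeholder for that base64-encoded key.
    virsh secret-set-value --secret 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd --base64 "$NOVA_CEPH_KEY"
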
orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.206817 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.206852 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.206860 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.206868 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.206876 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.206889 | orchestrator | 2025-02-10 09:52:40.206899 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-02-10 09:52:40.206907 | orchestrator | Monday 10 February 2025 09:48:25 +0000 (0:00:01.133) 0:05:27.033 ******* 2025-02-10 09:52:40.206923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.206933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.206943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.206952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.206991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.207020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.207025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 
'timeout': '30'}}}) 2025-02-10 09:52:40.207079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207275 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-02-10 09:52:40.207300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207389 | orchestrator | 2025-02-10 09:52:40.207398 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-02-10 09:52:40.207407 | orchestrator | Monday 10 February 2025 09:48:30 +0000 (0:00:04.470) 0:05:31.504 ******* 2025-02-10 09:52:40.207420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.207430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
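The "Copying over config.json files for services" task above installs /etc/kolla/<service>/config.json on each node; inside the containers, Kolla's start script reads this file to copy the mounted configuration into place and then exec the service command. Illustrative shape only; the command, paths, owner and permissions here are assumptions for a nova-compute-like service, not taken from this job's templates:

    cat > /etc/kolla/nova-compute/config.json <<'EOF'
    {
        "command": "nova-compute",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/nova.conf",
                "dest": "/etc/nova/nova.conf",
                "owner": "nova",
                "perm": "0600"
            }
        ]
    }
    EOF
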
'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.207439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.207531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.207540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.207593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.207601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.207609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.207617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.207678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.207687 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.207733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.207750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.207847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.207869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.207885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.207901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.208029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.208047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.208056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.208074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.208095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.208112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.208121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.208129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.208138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.208153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.208162 | orchestrator | 2025-02-10 09:52:40.208171 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-02-10 09:52:40.208179 | orchestrator | Monday 10 February 2025 09:48:39 +0000 (0:00:09.243) 0:05:40.747 ******* 2025-02-10 09:52:40.208186 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.208194 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.208202 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.208210 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.208218 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.208226 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.208235 | orchestrator | 2025-02-10 09:52:40.208244 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-02-10 09:52:40.208251 | orchestrator | Monday 10 February 2025 09:48:42 +0000 (0:00:02.624) 0:05:43.372 ******* 2025-02-10 09:52:40.208259 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-10 09:52:40.208268 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-10 09:52:40.208276 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-02-10 09:52:40.208284 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-10 09:52:40.208292 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-10 09:52:40.208301 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.208309 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-10 09:52:40.208317 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.208326 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-10 09:52:40.208341 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.208350 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-02-10 09:52:40.208358 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-02-10 09:52:40.208367 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-02-10 09:52:40.208376 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-02-10 09:52:40.208383 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-02-10 09:52:40.208392 | orchestrator | 2025-02-10 09:52:40.208401 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-02-10 09:52:40.208409 | orchestrator | Monday 10 February 2025 09:48:49 +0000 (0:00:06.914) 0:05:50.286 ******* 2025-02-10 09:52:40.208418 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.208427 | orchestrator | skipping: [testbed-node-4] 2025-02-10 
09:52:40.208435 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.208444 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.208452 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.208467 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.208476 | orchestrator | 2025-02-10 09:52:40.208485 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-02-10 09:52:40.208494 | orchestrator | Monday 10 February 2025 09:48:50 +0000 (0:00:00.793) 0:05:51.080 ******* 2025-02-10 09:52:40.208503 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-10 09:52:40.208511 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-10 09:52:40.208519 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-10 09:52:40.208528 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-10 09:52:40.208536 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-10 09:52:40.208544 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-02-10 09:52:40.208552 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-10 09:52:40.208561 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-02-10 09:52:40.208570 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-02-10 09:52:40.208579 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-10 09:52:40.208587 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.208595 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-10 09:52:40.208604 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.208612 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-10 09:52:40.208620 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.208629 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:52:40.208637 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:52:40.208645 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:52:40.208654 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:52:40.208663 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:52:40.208672 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:52:40.208680 | 
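
The service definitions echoed in the loop output above each carry a healthcheck block with interval, retries, start_period, test and timeout fields, and the nova-libvirt entries additionally pin a memlock ulimit of 67108864 bytes (64 MiB). As a rough illustration of what such a block amounts to, the sketch below translates one definition into equivalent `docker run` health flags. The helper name is made up for this example, the numeric values are assumed to be seconds, and kolla-ansible itself drives the container engine through its own modules rather than these CLI flags.

# Minimal sketch (not kolla-ansible's actual code): map one service
# definition from the log onto equivalent `docker run` health options.
# Assumes interval/timeout/start_period are given in seconds.
def healthcheck_flags(service: dict) -> list[str]:
    hc = service.get("healthcheck")
    if not hc:
        return []
    kind, cmd = hc["test"][0], " ".join(hc["test"][1:])
    assert kind == "CMD-SHELL"  # the only form that appears in this log
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example: the nova-conductor entry as printed in the loop output above.
nova_conductor = {
    "container_name": "nova_conductor",
    "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                    "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
                    "timeout": "30"},
}
print(healthcheck_flags(nova_conductor))
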
orchestrator | 2025-02-10 09:52:40.208688 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-02-10 09:52:40.208697 | orchestrator | Monday 10 February 2025 09:48:57 +0000 (0:00:07.848) 0:05:58.929 ******* 2025-02-10 09:52:40.208705 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:52:40.208713 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:52:40.208721 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:52:40.208735 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-10 09:52:40.208743 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:52:40.208763 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-10 09:52:40.208779 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:52:40.208788 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:52:40.208796 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-10 09:52:40.208805 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:52:40.208813 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:52:40.208821 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:52:40.208829 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-10 09:52:40.208838 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.208847 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-10 09:52:40.208856 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.208864 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-10 09:52:40.208873 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.208881 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:52:40.208891 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:52:40.208899 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:52:40.208908 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:52:40.208916 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:52:40.208925 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:52:40.208933 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:52:40.208940 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:52:40.208948 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:52:40.208957 | orchestrator | 2025-02-10 09:52:40.208966 | 
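
The checks named in those definitions (healthcheck_curl, healthcheck_port, healthcheck_listen) are small scripts shipped inside the kolla images; their implementations are not part of this log. Purely as an assumed approximation, the sketch below shows the kind of probe a 'healthcheck_listen sshd 8022' test implies for the nova-ssh container whose sshd_config, ssh keys and ssh_config were copied in the task above; the function name and the connect-based approach are illustrative only.

# Rough illustration (assumption, not the actual kolla healthcheck_listen
# script): confirm that something is accepting connections on the nova-ssh
# port referenced by the 'healthcheck_listen sshd 8022' test in the log.
import socket

def is_listening(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_listening("127.0.0.1", 8022))
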
orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-02-10 09:52:40.208991 | orchestrator | Monday 10 February 2025 09:49:11 +0000 (0:00:13.618) 0:06:12.547 ******* 2025-02-10 09:52:40.209001 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.209009 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.209017 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.209025 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.209033 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.209042 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.209049 | orchestrator | 2025-02-10 09:52:40.209058 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-02-10 09:52:40.209063 | orchestrator | Monday 10 February 2025 09:49:12 +0000 (0:00:00.972) 0:06:13.520 ******* 2025-02-10 09:52:40.209068 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.209073 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.209078 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.209083 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.209088 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.209093 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.209097 | orchestrator | 2025-02-10 09:52:40.209103 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-02-10 09:52:40.209111 | orchestrator | Monday 10 February 2025 09:49:13 +0000 (0:00:01.276) 0:06:14.797 ******* 2025-02-10 09:52:40.209119 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.209136 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.209144 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.209152 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.209159 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.209167 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.209175 | orchestrator | 2025-02-10 09:52:40.209182 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-02-10 09:52:40.209190 | orchestrator | Monday 10 February 2025 09:49:17 +0000 (0:00:03.505) 0:06:18.302 ******* 2025-02-10 09:52:40.209215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': 
False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.209298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.209362 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.209376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209406 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.209415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.209479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209509 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.209518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.209556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209571 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:52:40.209584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.209633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}})  2025-02-10 09:52:40.209652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209663 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.209675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.209690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': 
{'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209721 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.209729 | orchestrator | 2025-02-10 09:52:40.209736 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-02-10 09:52:40.209745 | orchestrator | Monday 10 February 2025 09:49:19 +0000 (0:00:02.249) 0:06:20.552 ******* 2025-02-10 09:52:40.209750 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-02-10 09:52:40.209755 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-02-10 09:52:40.209760 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.209765 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-02-10 09:52:40.209770 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-02-10 09:52:40.209774 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.209780 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-02-10 09:52:40.209785 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-02-10 09:52:40.209789 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.209795 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-02-10 09:52:40.209799 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-02-10 09:52:40.209804 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.209809 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-02-10 09:52:40.209814 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-02-10 09:52:40.209819 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.209824 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-02-10 09:52:40.209829 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-02-10 09:52:40.209834 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.209843 | orchestrator | 2025-02-10 09:52:40.209848 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-02-10 09:52:40.209853 | orchestrator | Monday 10 February 2025 09:49:20 +0000 (0:00:01.161) 0:06:21.713 ******* 2025-02-10 09:52:40.209862 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.209872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.209882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:52:40.209921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:52:40.209926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:52:40.209931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.209936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.209954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.209959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.209985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.209995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.210038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.210062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.210069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.210074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.210079 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.210090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.210113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.210161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.210167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.210176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.210181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.210187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:52:40.210215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:52:40.210224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.210241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.210267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.210295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.210323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.210332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:52:40.210345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
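Each container definition printed by the "Check nova-cell containers" task above carries a healthcheck block whose 'test' entry names a probe such as "healthcheck_port nova-compute 5672", "healthcheck_listen sshd 8022" or "healthcheck_curl http://192.168.16.10:6080/vnc_lite.html", with a 30 second interval and timeout. The kolla healthcheck scripts themselves are not shown in this log; the sketch below is only a rough, external stand-in for the kind of probes those entries describe (a TCP connect and an HTTP GET). The helper names are hypothetical, the hosts, ports and URLs are copied from the log, and the real kolla helpers run inside the container (and, as far as the log shows, inspect the named process), which this simplified version does not attempt.

# Hedged illustration only: an external TCP/HTTP probe roughly analogous to the
# 'test' entries shown in the container definitions above. Not the kolla-ansible
# healthcheck scripts; helper names here are hypothetical.
import socket
import sys
import urllib.request


def check_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_url(url: str, timeout: float = 30.0) -> bool:
    """Return True if an HTTP GET on url answers with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # URLError (and HTTPError) are OSError subclasses
        return False


if __name__ == "__main__":
    # Host, port and URL taken from the healthcheck entries in the log above.
    ok = check_port("192.168.16.10", 5672) and check_url(
        "http://192.168.16.10:6080/vnc_lite.html"
    )
    sys.exit(0 if ok else 1)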
2025-02-10 09:52:40.210361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:52:40.210367 | orchestrator | 2025-02-10 09:52:40.210376 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:52:40.210385 | orchestrator | Monday 10 February 2025 09:49:25 +0000 (0:00:05.011) 0:06:26.724 ******* 2025-02-10 09:52:40.210393 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.210401 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.210409 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.211748 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.211781 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.211789 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.211797 | orchestrator | 2025-02-10 09:52:40.211805 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:52:40.211814 | orchestrator | Monday 10 February 2025 09:49:26 +0000 (0:00:01.073) 0:06:27.797 ******* 2025-02-10 09:52:40.211820 | orchestrator | 2025-02-10 09:52:40.211825 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:52:40.211838 | orchestrator | Monday 10 February 2025 09:49:26 +0000 (0:00:00.149) 0:06:27.947 ******* 2025-02-10 09:52:40.211843 | orchestrator | 2025-02-10 09:52:40.211848 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:52:40.211853 | orchestrator | Monday 10 February 2025 09:49:27 +0000 (0:00:00.335) 0:06:28.282 ******* 2025-02-10 09:52:40.211857 | orchestrator | 2025-02-10 09:52:40.211862 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:52:40.211867 | orchestrator | Monday 10 February 2025 09:49:27 +0000 (0:00:00.133) 0:06:28.416 ******* 2025-02-10 09:52:40.211872 | orchestrator | 2025-02-10 09:52:40.211879 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:52:40.211887 | orchestrator | Monday 10 February 2025 09:49:27 +0000 (0:00:00.357) 0:06:28.773 ******* 2025-02-10 09:52:40.211894 | orchestrator | 2025-02-10 09:52:40.211902 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:52:40.211909 | orchestrator | Monday 10 February 2025 09:49:27 +0000 (0:00:00.131) 0:06:28.905 ******* 2025-02-10 09:52:40.211917 | orchestrator | 2025-02-10 09:52:40.211925 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-02-10 09:52:40.211933 | orchestrator | Monday 10 February 2025 09:49:28 +0000 (0:00:00.421) 0:06:29.326 ******* 2025-02-10 09:52:40.211942 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.211960 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.211968 | orchestrator | 
changed: [testbed-node-2] 2025-02-10 09:52:40.211990 | orchestrator | 2025-02-10 09:52:40.211998 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-02-10 09:52:40.212006 | orchestrator | Monday 10 February 2025 09:49:37 +0000 (0:00:08.947) 0:06:38.273 ******* 2025-02-10 09:52:40.212013 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:40.212021 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.212028 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.212036 | orchestrator | 2025-02-10 09:52:40.212043 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-02-10 09:52:40.212051 | orchestrator | Monday 10 February 2025 09:49:52 +0000 (0:00:15.601) 0:06:53.874 ******* 2025-02-10 09:52:40.212060 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.212068 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.212076 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.212084 | orchestrator | 2025-02-10 09:52:40.212092 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-02-10 09:52:40.212100 | orchestrator | Monday 10 February 2025 09:50:09 +0000 (0:00:16.209) 0:07:10.084 ******* 2025-02-10 09:52:40.212108 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.212116 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.212123 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.212132 | orchestrator | 2025-02-10 09:52:40.212140 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-02-10 09:52:40.212147 | orchestrator | Monday 10 February 2025 09:50:33 +0000 (0:00:24.918) 0:07:35.003 ******* 2025-02-10 09:52:40.212155 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.212162 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.212166 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.212171 | orchestrator | 2025-02-10 09:52:40.212176 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-02-10 09:52:40.212192 | orchestrator | Monday 10 February 2025 09:50:34 +0000 (0:00:00.914) 0:07:35.917 ******* 2025-02-10 09:52:40.212197 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.212202 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.212207 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.212211 | orchestrator | 2025-02-10 09:52:40.212216 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-02-10 09:52:40.212221 | orchestrator | Monday 10 February 2025 09:50:35 +0000 (0:00:01.096) 0:07:37.014 ******* 2025-02-10 09:52:40.212226 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:52:40.212230 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:52:40.212235 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:52:40.212240 | orchestrator | 2025-02-10 09:52:40.212245 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute-ironic container] ************ 2025-02-10 09:52:40.212250 | orchestrator | Monday 10 February 2025 09:51:00 +0000 (0:00:24.318) 0:08:01.333 ******* 2025-02-10 09:52:40.212254 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.212259 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:40.212264 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.212269 | orchestrator | 2025-02-10 
09:52:40.212273 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-02-10 09:52:40.212279 | orchestrator | Monday 10 February 2025 09:51:13 +0000 (0:00:12.741) 0:08:14.074 ******* 2025-02-10 09:52:40.212284 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.212288 | orchestrator | 2025-02-10 09:52:40.212293 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-02-10 09:52:40.212298 | orchestrator | Monday 10 February 2025 09:51:13 +0000 (0:00:00.165) 0:08:14.239 ******* 2025-02-10 09:52:40.212303 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.212307 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.212316 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.212321 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.212326 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.212331 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-02-10 09:52:40.212337 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:52:40.212342 | orchestrator | 2025-02-10 09:52:40.212347 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-02-10 09:52:40.212352 | orchestrator | Monday 10 February 2025 09:51:36 +0000 (0:00:23.463) 0:08:37.702 ******* 2025-02-10 09:52:40.212357 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.212361 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.212366 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.212371 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.212376 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.212381 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.212385 | orchestrator | 2025-02-10 09:52:40.212390 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-02-10 09:52:40.212398 | orchestrator | Monday 10 February 2025 09:51:55 +0000 (0:00:18.446) 0:08:56.149 ******* 2025-02-10 09:52:40.212403 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.212408 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.212412 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.212417 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.212424 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.212432 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-02-10 09:52:40.212440 | orchestrator | 2025-02-10 09:52:40.212447 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-02-10 09:52:40.212454 | orchestrator | Monday 10 February 2025 09:52:01 +0000 (0:00:06.609) 0:09:02.759 ******* 2025-02-10 09:52:40.212462 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:52:40.212474 | orchestrator | 2025-02-10 09:52:40.212481 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-02-10 09:52:40.212489 | orchestrator | Monday 10 February 2025 09:52:14 +0000 (0:00:12.585) 0:09:15.344 ******* 2025-02-10 09:52:40.212496 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:52:40.212503 | orchestrator | 2025-02-10 09:52:40.212516 | 
orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-02-10 09:52:40.212523 | orchestrator | Monday 10 February 2025 09:52:15 +0000 (0:00:01.244) 0:09:16.589 ******* 2025-02-10 09:52:40.212530 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.212539 | orchestrator | 2025-02-10 09:52:40.212546 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-02-10 09:52:40.212554 | orchestrator | Monday 10 February 2025 09:52:16 +0000 (0:00:01.219) 0:09:17.809 ******* 2025-02-10 09:52:40.212562 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:52:40.212570 | orchestrator | 2025-02-10 09:52:40.212578 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-02-10 09:52:40.212585 | orchestrator | Monday 10 February 2025 09:52:27 +0000 (0:00:10.800) 0:09:28.610 ******* 2025-02-10 09:52:40.212593 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:52:40.212601 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:52:40.212610 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:52:40.212622 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:52:40.212631 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:52:40.212639 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:52:40.212647 | orchestrator | 2025-02-10 09:52:40.212654 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-02-10 09:52:40.212662 | orchestrator | 2025-02-10 09:52:40.212670 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-02-10 09:52:40.212678 | orchestrator | Monday 10 February 2025 09:52:30 +0000 (0:00:02.576) 0:09:31.186 ******* 2025-02-10 09:52:40.212686 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:52:40.212694 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:52:40.212702 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:52:40.212710 | orchestrator | 2025-02-10 09:52:40.212718 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-02-10 09:52:40.212725 | orchestrator | 2025-02-10 09:52:40.212733 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-02-10 09:52:40.212741 | orchestrator | Monday 10 February 2025 09:52:31 +0000 (0:00:01.448) 0:09:32.635 ******* 2025-02-10 09:52:40.212746 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.212751 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.212756 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.212761 | orchestrator | 2025-02-10 09:52:40.212765 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-02-10 09:52:40.212770 | orchestrator | 2025-02-10 09:52:40.212775 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-02-10 09:52:40.212780 | orchestrator | Monday 10 February 2025 09:52:32 +0000 (0:00:00.775) 0:09:33.411 ******* 2025-02-10 09:52:40.212785 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-02-10 09:52:40.212789 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-02-10 09:52:40.212794 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-02-10 09:52:40.212799 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-02-10 09:52:40.212804 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-02-10 09:52:40.212809 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-02-10 09:52:40.212813 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:52:40.212818 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-02-10 09:52:40.212823 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-02-10 09:52:40.212828 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-02-10 09:52:40.212838 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-02-10 09:52:40.212843 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-02-10 09:52:40.212848 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-02-10 09:52:40.212852 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:52:40.212857 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-02-10 09:52:40.212862 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-02-10 09:52:40.212867 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-02-10 09:52:40.212872 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-02-10 09:52:40.212877 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-02-10 09:52:40.212882 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-02-10 09:52:40.212886 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:52:40.212891 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-02-10 09:52:40.212896 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-02-10 09:52:40.212901 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-02-10 09:52:40.212905 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-02-10 09:52:40.212910 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-02-10 09:52:40.212915 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-02-10 09:52:40.212920 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.212931 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-02-10 09:52:40.212936 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-02-10 09:52:40.212941 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-02-10 09:52:40.212946 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-02-10 09:52:40.212950 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-02-10 09:52:40.212955 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-02-10 09:52:40.212960 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.212968 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-02-10 09:52:40.212992 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-02-10 09:52:40.213001 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-02-10 09:52:40.213015 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-02-10 09:52:40.213024 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-02-10 09:52:40.213032 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-02-10 09:52:40.213041 | orchestrator | skipping: 
[testbed-node-2] 2025-02-10 09:52:40.213048 | orchestrator | 2025-02-10 09:52:40.213052 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-02-10 09:52:40.213057 | orchestrator | 2025-02-10 09:52:40.213062 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-02-10 09:52:40.213069 | orchestrator | Monday 10 February 2025 09:52:34 +0000 (0:00:01.927) 0:09:35.338 ******* 2025-02-10 09:52:40.213077 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-02-10 09:52:40.213085 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-02-10 09:52:40.213093 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.213101 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-02-10 09:52:40.213109 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-02-10 09:52:40.213117 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.213124 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-02-10 09:52:40.213132 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-02-10 09:52:40.213139 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.213146 | orchestrator | 2025-02-10 09:52:40.213157 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-02-10 09:52:40.213161 | orchestrator | 2025-02-10 09:52:40.213166 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-02-10 09:52:40.213171 | orchestrator | Monday 10 February 2025 09:52:35 +0000 (0:00:00.946) 0:09:36.285 ******* 2025-02-10 09:52:40.213176 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.213181 | orchestrator | 2025-02-10 09:52:40.213185 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-02-10 09:52:40.213190 | orchestrator | 2025-02-10 09:52:40.213195 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-02-10 09:52:40.213200 | orchestrator | Monday 10 February 2025 09:52:36 +0000 (0:00:00.958) 0:09:37.244 ******* 2025-02-10 09:52:40.213204 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:52:40.213209 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:52:40.213214 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:52:40.213219 | orchestrator | 2025-02-10 09:52:40.213223 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:52:40.213228 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:52:40.213235 | orchestrator | testbed-node-0 : ok=55  changed=36  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-02-10 09:52:40.213240 | orchestrator | testbed-node-1 : ok=28  changed=20  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-02-10 09:52:40.213245 | orchestrator | testbed-node-2 : ok=28  changed=20  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-02-10 09:52:40.213250 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-02-10 09:52:40.213256 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-02-10 09:52:40.213264 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 
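The PLAY RECAP above summarizes, per host, how many tasks finished ok, changed, skipped, unreachable or failed; in this run every host reports unreachable=0 and failed=0. As a hedged illustration only (not part of the job output), the stand-alone helper below shows how such recap lines can be scanned for non-zero failure counters. It assumes the standard "host : ok=N changed=N unreachable=N failed=N ..." layout seen above and input lines already stripped of the console timestamp prefix.

# Hedged illustration: scan Ansible PLAY RECAP lines (as shown above) and flag
# hosts with unreachable or failed tasks. Not part of the deployment job.
import re
import sys

RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)


def failed_hosts(lines):
    """Yield (host, unreachable, failed) for hosts with a non-zero counter."""
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m and (int(m["unreachable"]) or int(m["failed"])):
            yield m["host"], int(m["unreachable"]), int(m["failed"])


if __name__ == "__main__":
    bad = list(failed_hosts(sys.stdin))
    for host, unreachable, failed in bad:
        print(f"{host}: unreachable={unreachable} failed={failed}")
    sys.exit(1 if bad else 0)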
2025-02-10 09:52:40.213272 | orchestrator | 2025-02-10 09:52:40.213279 | orchestrator | 2025-02-10 09:52:40.213290 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:52:40.213298 | orchestrator | Monday 10 February 2025 09:52:36 +0000 (0:00:00.679) 0:09:37.923 ******* 2025-02-10 09:52:40.213306 | orchestrator | =============================================================================== 2025-02-10 09:52:40.213314 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 28.60s 2025-02-10 09:52:40.213322 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.92s 2025-02-10 09:52:40.213330 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.32s 2025-02-10 09:52:40.213338 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.46s 2025-02-10 09:52:40.213345 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.02s 2025-02-10 09:52:40.213353 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 18.45s 2025-02-10 09:52:40.213360 | orchestrator | nova-cell : Create cell ------------------------------------------------ 17.08s 2025-02-10 09:52:40.213365 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.43s 2025-02-10 09:52:40.213369 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 16.21s 2025-02-10 09:52:40.213374 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.60s 2025-02-10 09:52:40.213379 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.26s 2025-02-10 09:52:40.213384 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 14.17s 2025-02-10 09:52:40.213393 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.66s 2025-02-10 09:52:40.213401 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 13.62s 2025-02-10 09:52:43.231176 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.91s 2025-02-10 09:52:43.231327 | orchestrator | nova-cell : Restart nova-compute-ironic container ---------------------- 12.74s 2025-02-10 09:52:43.231346 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.59s 2025-02-10 09:52:43.231362 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.27s 2025-02-10 09:52:43.231377 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.80s 2025-02-10 09:52:43.231391 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 9.32s 2025-02-10 09:52:43.231405 | orchestrator | 2025-02-10 09:52:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:52:43.231441 | orchestrator | 2025-02-10 09:52:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:52:46.269896 | orchestrator | 2025-02-10 09:52:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:52:49.308678 | orchestrator | 2025-02-10 09:52:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:52:52.348885 | orchestrator | 2025-02-10 09:52:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 
09:52:55.395510 | orchestrator | 2025-02-10 09:52:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:52:58.437050 | orchestrator | 2025-02-10 09:52:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:01.470423 | orchestrator | 2025-02-10 09:53:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:04.514448 | orchestrator | 2025-02-10 09:53:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:07.560186 | orchestrator | 2025-02-10 09:53:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:10.600748 | orchestrator | 2025-02-10 09:53:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:13.657001 | orchestrator | 2025-02-10 09:53:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:16.699176 | orchestrator | 2025-02-10 09:53:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:19.741270 | orchestrator | 2025-02-10 09:53:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:22.779751 | orchestrator | 2025-02-10 09:53:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:25.815313 | orchestrator | 2025-02-10 09:53:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:28.868119 | orchestrator | 2025-02-10 09:53:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:31.927585 | orchestrator | 2025-02-10 09:53:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:34.966184 | orchestrator | 2025-02-10 09:53:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:38.032533 | orchestrator | 2025-02-10 09:53:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:53:41.075348 | orchestrator | 2025-02-10 09:53:41.379664 | orchestrator | 2025-02-10 09:53:41.386172 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Feb 10 09:53:41 UTC 2025 2025-02-10 09:53:41.395122 | orchestrator | 2025-02-10 09:53:52.366672 | orchestrator | changed 2025-02-10 09:53:52.710651 | 2025-02-10 09:53:52.710813 | TASK [Bootstrap services] 2025-02-10 09:53:53.447967 | orchestrator | 2025-02-10 09:53:53.459070 | orchestrator | # BOOTSTRAP 2025-02-10 09:53:53.459139 | orchestrator | 2025-02-10 09:53:53.459156 | orchestrator | + set -e 2025-02-10 09:53:53.459196 | orchestrator | + echo 2025-02-10 09:53:53.459212 | orchestrator | + echo '# BOOTSTRAP' 2025-02-10 09:53:53.459228 | orchestrator | + echo 2025-02-10 09:53:53.459249 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-02-10 09:53:53.459284 | orchestrator | + set -e 2025-02-10 09:54:00.259804 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-02-10 09:54:00.265448 | orchestrator | 2025-02-10 09:54:00 | INFO  | Flavor SCS-1V-4 created 2025-02-10 09:54:00.435283 | orchestrator | 2025-02-10 09:54:00 | INFO  | Flavor SCS-2V-8 created 2025-02-10 09:54:00.595197 | orchestrator | 2025-02-10 09:54:00 | INFO  | Flavor SCS-4V-16 created 2025-02-10 09:54:00.728845 | orchestrator | 2025-02-10 09:54:00 | INFO  | Flavor SCS-8V-32 created 2025-02-10 09:54:00.850589 | orchestrator | 2025-02-10 09:54:00 | INFO  | Flavor SCS-1V-2 created 2025-02-10 09:54:01.008620 | orchestrator | 2025-02-10 09:54:01 | INFO  | Flavor SCS-2V-4 created 2025-02-10 09:54:01.146471 | orchestrator | 2025-02-10 09:54:01 | INFO  | Flavor SCS-4V-8 created 2025-02-10 09:54:01.257700 | orchestrator | 
2025-02-10 09:54:01 | INFO  | Flavor SCS-8V-16 created 2025-02-10 09:54:01.390714 | orchestrator | 2025-02-10 09:54:01 | INFO  | Flavor SCS-16V-32 created 2025-02-10 09:54:01.519199 | orchestrator | 2025-02-10 09:54:01 | INFO  | Flavor SCS-1V-8 created 2025-02-10 09:54:01.624619 | orchestrator | 2025-02-10 09:54:01 | INFO  | Flavor SCS-2V-16 created 2025-02-10 09:54:01.748937 | orchestrator | 2025-02-10 09:54:01 | INFO  | Flavor SCS-4V-32 created 2025-02-10 09:54:01.878108 | orchestrator | 2025-02-10 09:54:01 | INFO  | Flavor SCS-1L-1 created 2025-02-10 09:54:02.014948 | orchestrator | 2025-02-10 09:54:02 | INFO  | Flavor SCS-2V-4-20s created 2025-02-10 09:54:02.141668 | orchestrator | 2025-02-10 09:54:02 | INFO  | Flavor SCS-4V-16-100s created 2025-02-10 09:54:02.281790 | orchestrator | 2025-02-10 09:54:02 | INFO  | Flavor SCS-1V-4-10 created 2025-02-10 09:54:02.396716 | orchestrator | 2025-02-10 09:54:02 | INFO  | Flavor SCS-2V-8-20 created 2025-02-10 09:54:02.528400 | orchestrator | 2025-02-10 09:54:02 | INFO  | Flavor SCS-4V-16-50 created 2025-02-10 09:54:02.662644 | orchestrator | 2025-02-10 09:54:02 | INFO  | Flavor SCS-8V-32-100 created 2025-02-10 09:54:02.768580 | orchestrator | 2025-02-10 09:54:02 | INFO  | Flavor SCS-1V-2-5 created 2025-02-10 09:54:02.901250 | orchestrator | 2025-02-10 09:54:02 | INFO  | Flavor SCS-2V-4-10 created 2025-02-10 09:54:03.052207 | orchestrator | 2025-02-10 09:54:03 | INFO  | Flavor SCS-4V-8-20 created 2025-02-10 09:54:03.156031 | orchestrator | 2025-02-10 09:54:03 | INFO  | Flavor SCS-8V-16-50 created 2025-02-10 09:54:03.304320 | orchestrator | 2025-02-10 09:54:03 | INFO  | Flavor SCS-16V-32-100 created 2025-02-10 09:54:03.439230 | orchestrator | 2025-02-10 09:54:03 | INFO  | Flavor SCS-1V-8-20 created 2025-02-10 09:54:03.558867 | orchestrator | 2025-02-10 09:54:03 | INFO  | Flavor SCS-2V-16-50 created 2025-02-10 09:54:03.702546 | orchestrator | 2025-02-10 09:54:03 | INFO  | Flavor SCS-4V-32-100 created 2025-02-10 09:54:03.868331 | orchestrator | 2025-02-10 09:54:03 | INFO  | Flavor SCS-1L-1-5 created 2025-02-10 09:54:05.998579 | orchestrator | 2025-02-10 09:54:05 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-02-10 09:54:06.090613 | orchestrator | 2025-02-10 09:54:06 | INFO  | Task 89c3c1fb-cf19-4607-89df-277a22ece2ab (bootstrap-basic) was prepared for execution. 2025-02-10 09:54:08.769741 | orchestrator | 2025-02-10 09:54:06 | INFO  | It takes a moment until task 89c3c1fb-cf19-4607-89df-277a22ece2ab (bootstrap-basic) has been started and output is visible here. 2025-02-10 09:54:08.769990 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2025-02-10 09:54:09.343874 | orchestrator | -vvvv to see details 2025-02-10 09:54:09.344027 | orchestrator | 2025-02-10 09:54:09.344476 | orchestrator | PLAY [Prepare masquerading on the manager node] ******************************** 2025-02-10 09:54:09.345049 | orchestrator | 2025-02-10 09:54:09.346647 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 09:54:09.976047 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2025-02-10 09:54:09.976426 | orchestrator | 2025-02-10 09:54:09.977720 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:54:09.979601 | orchestrator | 2025-02-10 09:54:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:54:09.979754 | orchestrator | 2025-02-10 09:54:09 | INFO  | Please wait and do not abort execution. 2025-02-10 09:54:09.982656 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:54:09.983636 | orchestrator | 2025-02-10 09:54:10.221861 | orchestrator | 2025-02-10 09:54:10 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-02-10 09:54:10.225948 | orchestrator | 2025-02-10 09:54:10 | INFO  | Task 986b4ea3-ee55-4898-a780-51c7803eb82c (bootstrap-basic) was prepared for execution. 2025-02-10 09:54:13.850268 | orchestrator | 2025-02-10 09:54:10 | INFO  | It takes a moment until task 986b4ea3-ee55-4898-a780-51c7803eb82c (bootstrap-basic) has been started and output is visible here. 2025-02-10 09:54:13.850477 | orchestrator | 2025-02-10 09:54:13.850587 | orchestrator | PLAY [Prepare masquerading on the manager node] ******************************** 2025-02-10 09:54:13.851463 | orchestrator | 2025-02-10 09:54:13.852482 | orchestrator | TASK [Accept FORWARD on the management interface (incoming)] ******************* 2025-02-10 09:54:13.853738 | orchestrator | Monday 10 February 2025 09:54:13 +0000 (0:00:00.157) 0:00:00.157 ******* 2025-02-10 09:54:14.553266 | orchestrator | ok: [testbed-manager] 2025-02-10 09:54:14.555383 | orchestrator | 2025-02-10 09:54:15.097765 | orchestrator | TASK [Accept FORWARD on the management interface (outgoing)] ******************* 2025-02-10 09:54:15.097943 | orchestrator | Monday 10 February 2025 09:54:14 +0000 (0:00:00.694) 0:00:00.852 ******* 2025-02-10 09:54:15.097981 | orchestrator | ok: [testbed-manager] 2025-02-10 09:54:15.098633 | orchestrator | 2025-02-10 09:54:15.100093 | orchestrator | TASK [Masquerade traffic on the management interface] ************************** 2025-02-10 09:54:15.100308 | orchestrator | Monday 10 February 2025 09:54:15 +0000 (0:00:00.557) 0:00:01.410 ******* 2025-02-10 09:54:15.555094 | orchestrator | ok: [testbed-manager] 2025-02-10 09:54:15.555794 | orchestrator | 2025-02-10 09:54:15.556932 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-02-10 09:54:15.557759 | orchestrator | 2025-02-10 09:54:15.558332 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 09:54:15.559181 | orchestrator | Monday 10 February 2025 09:54:15 +0000 (0:00:00.455) 0:00:01.866 ******* 2025-02-10 09:54:17.062499 | orchestrator | ok: [localhost] 2025-02-10 09:54:17.062724 | orchestrator | 2025-02-10 09:54:17.063361 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-02-10 09:54:17.064089 | orchestrator | Monday 10 February 2025 09:54:17 +0000 (0:00:01.506) 0:00:03.373 ******* 2025-02-10 09:54:26.223394 | orchestrator | ok: [localhost] 2025-02-10 09:54:26.223709 | orchestrator | 2025-02-10 09:54:26.223742 | orchestrator | TASK [Create volume type LUKS] 
************************************************* 2025-02-10 09:54:33.432898 | orchestrator | Monday 10 February 2025 09:54:26 +0000 (0:00:09.160) 0:00:12.533 ******* 2025-02-10 09:54:33.433047 | orchestrator | changed: [localhost] 2025-02-10 09:54:33.433317 | orchestrator | 2025-02-10 09:54:33.433334 | orchestrator | TASK [Get volume type local] *************************************************** 2025-02-10 09:54:33.434259 | orchestrator | Monday 10 February 2025 09:54:33 +0000 (0:00:07.208) 0:00:19.742 ******* 2025-02-10 09:54:40.473957 | orchestrator | ok: [localhost] 2025-02-10 09:54:40.474356 | orchestrator | 2025-02-10 09:54:40.474393 | orchestrator | TASK [Create volume type local] ************************************************ 2025-02-10 09:54:40.474419 | orchestrator | Monday 10 February 2025 09:54:40 +0000 (0:00:07.039) 0:00:26.782 ******* 2025-02-10 09:54:46.303406 | orchestrator | changed: [localhost] 2025-02-10 09:54:46.303593 | orchestrator | 2025-02-10 09:54:46.303624 | orchestrator | TASK [Create public network] *************************************************** 2025-02-10 09:54:46.304076 | orchestrator | Monday 10 February 2025 09:54:46 +0000 (0:00:05.830) 0:00:32.613 ******* 2025-02-10 09:54:51.577346 | orchestrator | changed: [localhost] 2025-02-10 09:54:51.577977 | orchestrator | 2025-02-10 09:54:51.578068 | orchestrator | TASK [Set public network to default] ******************************************* 2025-02-10 09:54:51.578678 | orchestrator | Monday 10 February 2025 09:54:51 +0000 (0:00:05.275) 0:00:37.888 ******* 2025-02-10 09:54:56.733414 | orchestrator | changed: [localhost] 2025-02-10 09:54:56.735122 | orchestrator | 2025-02-10 09:54:56.735212 | orchestrator | TASK [Create public subnet] **************************************************** 2025-02-10 09:55:00.931002 | orchestrator | Monday 10 February 2025 09:54:56 +0000 (0:00:05.155) 0:00:43.044 ******* 2025-02-10 09:55:00.931169 | orchestrator | changed: [localhost] 2025-02-10 09:55:04.679552 | orchestrator | 2025-02-10 09:55:04.679691 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-02-10 09:55:04.679709 | orchestrator | Monday 10 February 2025 09:55:00 +0000 (0:00:04.196) 0:00:47.241 ******* 2025-02-10 09:55:04.679742 | orchestrator | changed: [localhost] 2025-02-10 09:55:04.680256 | orchestrator | 2025-02-10 09:55:04.680803 | orchestrator | TASK [Create manager role] ***************************************************** 2025-02-10 09:55:04.681827 | orchestrator | Monday 10 February 2025 09:55:04 +0000 (0:00:03.748) 0:00:50.989 ******* 2025-02-10 09:55:08.104360 | orchestrator | ok: [localhost] 2025-02-10 09:55:08.104925 | orchestrator | 2025-02-10 09:55:08.104959 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:55:08.104972 | orchestrator | 2025-02-10 09:55:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:55:08.105022 | orchestrator | 2025-02-10 09:55:08 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:55:08.105039 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:55:08.105239 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:55:08.107205 | orchestrator | 
2025-02-10 09:55:08.107646 | orchestrator | Monday 10 February 2025 09:55:08 +0000 (0:00:03.424) 0:00:54.414 *******
2025-02-10 09:55:08.107890 | orchestrator | ===============================================================================
2025-02-10 09:55:08.108236 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.16s
2025-02-10 09:55:08.108699 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.21s
2025-02-10 09:55:08.108878 | orchestrator | Get volume type local --------------------------------------------------- 7.04s
2025-02-10 09:55:08.109104 | orchestrator | Create volume type local ------------------------------------------------ 5.83s
2025-02-10 09:55:08.109736 | orchestrator | Create public network --------------------------------------------------- 5.28s
2025-02-10 09:55:08.110146 | orchestrator | Set public network to default ------------------------------------------- 5.16s
2025-02-10 09:55:08.111580 | orchestrator | Create public subnet ---------------------------------------------------- 4.20s
2025-02-10 09:55:08.111696 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.75s
2025-02-10 09:55:08.111736 | orchestrator | Create manager role ----------------------------------------------------- 3.43s
2025-02-10 09:55:08.111746 | orchestrator | Gathering Facts --------------------------------------------------------- 1.51s
2025-02-10 09:55:08.111759 | orchestrator | Accept FORWARD on the management interface (incoming) ------------------- 0.69s
2025-02-10 09:55:08.112421 | orchestrator | Accept FORWARD on the management interface (outgoing) ------------------- 0.56s
2025-02-10 09:55:08.112488 | orchestrator | Masquerade traffic on the management interface -------------------------- 0.46s
2025-02-10 09:55:14.027416 | orchestrator | 2025-02-10 09:55:14 | INFO  | Processing image 'Cirros 0.6.2'
2025-02-10 09:55:14.252751 | orchestrator | 2025-02-10 09:55:14 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-02-10 09:55:16.000808 | orchestrator | 2025-02-10 09:55:14 | INFO  | Importing image Cirros 0.6.2
2025-02-10 09:55:16.000949 | orchestrator | 2025-02-10 09:55:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-02-10 09:55:16.000971 | orchestrator | 2025-02-10 09:55:15 | INFO  | Waiting for image to leave queued state...
2025-02-10 09:55:18.067535 | orchestrator | 2025-02-10 09:55:18 | INFO  | Waiting for import to complete...
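The image handling in this step appears to correspond to openstack-image-manager: each image is imported by Glance straight from the upstream URL and then annotated with the tags and SCS metadata properties listed in the log. As a minimal sketch, assuming an admin OS_CLOUD is configured, roughly the same result could be reached with plain OpenStack CLI calls (hypothetical manual equivalent; image name, URL and property values are taken from the log above):

    # Hypothetical manual equivalent; the job drives this via the bootstrap
    # script, not the plain CLI. Download the image and upload it to Glance.
    curl -LO https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
    openstack image create --disk-format qcow2 --container-format bare \
      --file cirros-0.6.2-x86_64-disk.img "Cirros 0.6.2"
    # Apply the tag and a subset of the properties the log shows, then publish.
    openstack image set --tag os:cirros \
      --property architecture=x86_64 --property hw_disk_bus=scsi \
      --property hw_rng_model=virtio --property os_distro=cirros \
      --public "Cirros 0.6.2"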
2025-02-10 09:55:28.198494 | orchestrator | 2025-02-10 09:55:28 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-02-10 09:55:28.575104 | orchestrator | 2025-02-10 09:55:28 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-02-10 09:55:28.840868 | orchestrator | 2025-02-10 09:55:28 | INFO  | Setting internal_version = 0.6.2 2025-02-10 09:55:28.840997 | orchestrator | 2025-02-10 09:55:28 | INFO  | Setting image_original_user = cirros 2025-02-10 09:55:28.841067 | orchestrator | 2025-02-10 09:55:28 | INFO  | Adding tag os:cirros 2025-02-10 09:55:28.841104 | orchestrator | 2025-02-10 09:55:28 | INFO  | Setting property architecture: x86_64 2025-02-10 09:55:29.149613 | orchestrator | 2025-02-10 09:55:29 | INFO  | Setting property hw_disk_bus: scsi 2025-02-10 09:55:29.505755 | orchestrator | 2025-02-10 09:55:29 | INFO  | Setting property hw_rng_model: virtio 2025-02-10 09:55:29.777148 | orchestrator | 2025-02-10 09:55:29 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-02-10 09:55:30.023067 | orchestrator | 2025-02-10 09:55:30 | INFO  | Setting property hw_watchdog_action: reset 2025-02-10 09:55:30.242119 | orchestrator | 2025-02-10 09:55:30 | INFO  | Setting property hypervisor_type: qemu 2025-02-10 09:55:30.430100 | orchestrator | 2025-02-10 09:55:30 | INFO  | Setting property os_distro: cirros 2025-02-10 09:55:30.623950 | orchestrator | 2025-02-10 09:55:30 | INFO  | Setting property replace_frequency: never 2025-02-10 09:55:30.881271 | orchestrator | 2025-02-10 09:55:30 | INFO  | Setting property uuid_validity: none 2025-02-10 09:55:31.095706 | orchestrator | 2025-02-10 09:55:31 | INFO  | Setting property provided_until: none 2025-02-10 09:55:31.337613 | orchestrator | 2025-02-10 09:55:31 | INFO  | Setting property image_description: Cirros 2025-02-10 09:55:31.570470 | orchestrator | 2025-02-10 09:55:31 | INFO  | Setting property image_name: Cirros 2025-02-10 09:55:31.792070 | orchestrator | 2025-02-10 09:55:31 | INFO  | Setting property internal_version: 0.6.2 2025-02-10 09:55:32.017670 | orchestrator | 2025-02-10 09:55:32 | INFO  | Setting property image_original_user: cirros 2025-02-10 09:55:32.249329 | orchestrator | 2025-02-10 09:55:32 | INFO  | Setting property os_version: 0.6.2 2025-02-10 09:55:32.496002 | orchestrator | 2025-02-10 09:55:32 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-02-10 09:55:32.734458 | orchestrator | 2025-02-10 09:55:32 | INFO  | Setting property image_build_date: 2023-05-30 2025-02-10 09:55:32.995935 | orchestrator | 2025-02-10 09:55:32 | INFO  | Checking status of 'Cirros 0.6.2' 2025-02-10 09:55:33.400170 | orchestrator | 2025-02-10 09:55:32 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-02-10 09:55:33.400325 | orchestrator | 2025-02-10 09:55:32 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-02-10 09:55:33.400383 | orchestrator | 2025-02-10 09:55:33 | INFO  | Processing image 'Cirros 0.6.3' 2025-02-10 09:55:33.592867 | orchestrator | 2025-02-10 09:55:33 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-02-10 09:55:34.866925 | orchestrator | 2025-02-10 09:55:33 | INFO  | Importing image Cirros 0.6.3 2025-02-10 09:55:34.867060 | orchestrator | 2025-02-10 09:55:33 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-02-10 09:55:34.867112 | orchestrator | 2025-02-10 
09:55:34 | INFO  | Waiting for image to leave queued state... 2025-02-10 09:55:36.923180 | orchestrator | 2025-02-10 09:55:36 | INFO  | Waiting for import to complete... 2025-02-10 09:55:47.094770 | orchestrator | 2025-02-10 09:55:47 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-02-10 09:55:47.566152 | orchestrator | 2025-02-10 09:55:47 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-02-10 09:55:47.790535 | orchestrator | 2025-02-10 09:55:47 | INFO  | Setting internal_version = 0.6.3 2025-02-10 09:55:47.790663 | orchestrator | 2025-02-10 09:55:47 | INFO  | Setting image_original_user = cirros 2025-02-10 09:55:47.790683 | orchestrator | 2025-02-10 09:55:47 | INFO  | Adding tag os:cirros 2025-02-10 09:55:47.790717 | orchestrator | 2025-02-10 09:55:47 | INFO  | Setting property architecture: x86_64 2025-02-10 09:55:48.125599 | orchestrator | 2025-02-10 09:55:48 | INFO  | Setting property hw_disk_bus: scsi 2025-02-10 09:55:48.347549 | orchestrator | 2025-02-10 09:55:48 | INFO  | Setting property hw_rng_model: virtio 2025-02-10 09:55:48.588642 | orchestrator | 2025-02-10 09:55:48 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-02-10 09:55:48.807607 | orchestrator | 2025-02-10 09:55:48 | INFO  | Setting property hw_watchdog_action: reset 2025-02-10 09:55:49.032860 | orchestrator | 2025-02-10 09:55:49 | INFO  | Setting property hypervisor_type: qemu 2025-02-10 09:55:49.255670 | orchestrator | 2025-02-10 09:55:49 | INFO  | Setting property os_distro: cirros 2025-02-10 09:55:49.487512 | orchestrator | 2025-02-10 09:55:49 | INFO  | Setting property replace_frequency: never 2025-02-10 09:55:50.023169 | orchestrator | 2025-02-10 09:55:50 | INFO  | Setting property uuid_validity: none 2025-02-10 09:55:50.246633 | orchestrator | 2025-02-10 09:55:50 | INFO  | Setting property provided_until: none 2025-02-10 09:55:50.496111 | orchestrator | 2025-02-10 09:55:50 | INFO  | Setting property image_description: Cirros 2025-02-10 09:55:50.707153 | orchestrator | 2025-02-10 09:55:50 | INFO  | Setting property image_name: Cirros 2025-02-10 09:55:50.926678 | orchestrator | 2025-02-10 09:55:50 | INFO  | Setting property internal_version: 0.6.3 2025-02-10 09:55:51.153750 | orchestrator | 2025-02-10 09:55:51 | INFO  | Setting property image_original_user: cirros 2025-02-10 09:55:51.391874 | orchestrator | 2025-02-10 09:55:51 | INFO  | Setting property os_version: 0.6.3 2025-02-10 09:55:51.622869 | orchestrator | 2025-02-10 09:55:51 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-02-10 09:55:51.826780 | orchestrator | 2025-02-10 09:55:51 | INFO  | Setting property image_build_date: 2024-09-26 2025-02-10 09:55:52.053422 | orchestrator | 2025-02-10 09:55:52 | INFO  | Checking status of 'Cirros 0.6.3' 2025-02-10 09:55:53.138497 | orchestrator | 2025-02-10 09:55:52 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-02-10 09:55:53.138647 | orchestrator | 2025-02-10 09:55:52 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-02-10 09:55:53.138701 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-02-10 09:55:54.899865 | orchestrator | 2025-02-10 09:55:54 | INFO  | date: 2025-02-10 2025-02-10 09:55:54.918340 | orchestrator | 2025-02-10 09:55:54 | INFO  | image: octavia-amphora-haproxy-2024.1.20250210.qcow2 2025-02-10 09:55:54.918466 | orchestrator | 2025-02-10 09:55:54 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2 2025-02-10 09:55:54.918585 | orchestrator | 2025-02-10 09:55:54 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2.CHECKSUM 2025-02-10 09:55:54.918638 | orchestrator | 2025-02-10 09:55:54 | INFO  | checksum: 818d90cbc1a4e91780f1e125e5e94e12877510b54e6f64cd7dcd858ef37722f9 2025-02-10 09:55:57.318399 | orchestrator | 2025-02-10 09:55:57 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-02-10' 2025-02-10 09:55:57.334516 | orchestrator | 2025-02-10 09:55:57 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2: 200 2025-02-10 09:55:58.845117 | orchestrator | 2025-02-10 09:55:57 | INFO  | Importing image OpenStack Octavia Amphora 2025-02-10 2025-02-10 09:55:58.845261 | orchestrator | 2025-02-10 09:55:57 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2 2025-02-10 09:55:58.845300 | orchestrator | 2025-02-10 09:55:58 | INFO  | Waiting for image to leave queued state... 2025-02-10 09:56:00.883026 | orchestrator | 2025-02-10 09:56:00 | INFO  | Waiting for import to complete... 2025-02-10 09:56:11.023921 | orchestrator | 2025-02-10 09:56:11 | INFO  | Waiting for import to complete... 2025-02-10 09:56:21.127533 | orchestrator | 2025-02-10 09:56:21 | INFO  | Waiting for import to complete... 2025-02-10 09:56:31.244752 | orchestrator | 2025-02-10 09:56:31 | INFO  | Waiting for import to complete... 2025-02-10 09:56:41.368276 | orchestrator | 2025-02-10 09:56:41 | INFO  | Waiting for import to complete... 
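The amphora image is fetched together with its published SHA-256 checksum and, as shown further below, tagged with amphora after the import; Octavia typically locates its amphora image through this tag (the amp_image_tag setting, which kolla-ansible defaults to amphora). A minimal sketch of a manual download-and-verify, using the URL and checksum from the log (hypothetical; the bootstrap script performs these steps itself):

    # Download the published image and check it against the published checksum.
    curl -LO https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2
    echo "818d90cbc1a4e91780f1e125e5e94e12877510b54e6f64cd7dcd858ef37722f9  octavia-amphora-haproxy-2024.1.20250210.qcow2" | sha256sum -c -
    # After import, the image is only picked up by Octavia once it carries the amphora tag.
    openstack image set --tag amphora "OpenStack Octavia Amphora 2025-02-10"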
2025-02-10 09:56:51.519580 | orchestrator | 2025-02-10 09:56:51 | INFO  | Import of 'OpenStack Octavia Amphora 2025-02-10' successfully completed, reloading images 2025-02-10 09:56:51.878982 | orchestrator | 2025-02-10 09:56:51 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-02-10' 2025-02-10 09:56:51.880890 | orchestrator | 2025-02-10 09:56:51 | INFO  | Setting internal_version = 2025-02-10 2025-02-10 09:56:52.082482 | orchestrator | 2025-02-10 09:56:51 | INFO  | Setting image_original_user = ubuntu 2025-02-10 09:56:52.082647 | orchestrator | 2025-02-10 09:56:51 | INFO  | Adding tag amphora 2025-02-10 09:56:52.082687 | orchestrator | 2025-02-10 09:56:52 | INFO  | Adding tag os:ubuntu 2025-02-10 09:56:52.358304 | orchestrator | 2025-02-10 09:56:52 | INFO  | Setting property architecture: x86_64 2025-02-10 09:56:52.614490 | orchestrator | 2025-02-10 09:56:52 | INFO  | Setting property hw_disk_bus: scsi 2025-02-10 09:56:52.844014 | orchestrator | 2025-02-10 09:56:52 | INFO  | Setting property hw_rng_model: virtio 2025-02-10 09:56:53.070328 | orchestrator | 2025-02-10 09:56:53 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-02-10 09:56:53.280201 | orchestrator | 2025-02-10 09:56:53 | INFO  | Setting property hw_watchdog_action: reset 2025-02-10 09:56:53.521874 | orchestrator | 2025-02-10 09:56:53 | INFO  | Setting property hypervisor_type: qemu 2025-02-10 09:56:53.759167 | orchestrator | 2025-02-10 09:56:53 | INFO  | Setting property os_distro: ubuntu 2025-02-10 09:56:54.035159 | orchestrator | 2025-02-10 09:56:54 | INFO  | Setting property replace_frequency: quarterly 2025-02-10 09:56:54.235600 | orchestrator | 2025-02-10 09:56:54 | INFO  | Setting property uuid_validity: last-1 2025-02-10 09:56:54.443161 | orchestrator | 2025-02-10 09:56:54 | INFO  | Setting property provided_until: none 2025-02-10 09:56:54.716067 | orchestrator | 2025-02-10 09:56:54 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-02-10 09:56:54.943108 | orchestrator | 2025-02-10 09:56:54 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-02-10 09:56:55.198120 | orchestrator | 2025-02-10 09:56:55 | INFO  | Setting property internal_version: 2025-02-10 2025-02-10 09:56:55.460467 | orchestrator | 2025-02-10 09:56:55 | INFO  | Setting property image_original_user: ubuntu 2025-02-10 09:56:55.665339 | orchestrator | 2025-02-10 09:56:55 | INFO  | Setting property os_version: 2025-02-10 2025-02-10 09:56:55.907603 | orchestrator | 2025-02-10 09:56:55 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2 2025-02-10 09:56:56.152639 | orchestrator | 2025-02-10 09:56:56 | INFO  | Setting property image_build_date: 2025-02-10 2025-02-10 09:56:56.357434 | orchestrator | 2025-02-10 09:56:56 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-02-10' 2025-02-10 09:56:56.532671 | orchestrator | 2025-02-10 09:56:56 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-02-10' 2025-02-10 09:56:56.532845 | orchestrator | 2025-02-10 09:56:56 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-02-10 09:56:56.532984 | orchestrator | 2025-02-10 09:56:56 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-02-10 09:56:57.009856 | orchestrator | 2025-02-10 09:56:56 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-02-10 09:56:57.010171 | 
orchestrator | 2025-02-10 09:56:56 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-02-10 09:56:57.403481 | orchestrator | changed 2025-02-10 09:56:57.431594 | 2025-02-10 09:56:57.431718 | TASK [Run checks] 2025-02-10 09:56:58.189243 | orchestrator | + set -e 2025-02-10 09:56:58.190161 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 09:56:58.190215 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 09:56:58.190236 | orchestrator | ++ INTERACTIVE=false 2025-02-10 09:56:58.190285 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 09:56:58.190305 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 09:56:58.190322 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-02-10 09:56:58.190363 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-02-10 09:56:58.215947 | orchestrator | 2025-02-10 09:56:58.216332 | orchestrator | # CHECK 2025-02-10 09:56:58.216359 | orchestrator | 2025-02-10 09:56:58.216369 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 09:56:58.216379 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 09:56:58.216387 | orchestrator | + echo 2025-02-10 09:56:58.216395 | orchestrator | + echo '# CHECK' 2025-02-10 09:56:58.216403 | orchestrator | + echo 2025-02-10 09:56:58.216412 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-10 09:56:58.216428 | orchestrator | ++ semver 8.1.0 5.0.0 2025-02-10 09:56:58.258588 | orchestrator | 2025-02-10 09:57:00.277948 | orchestrator | ## Containers @ testbed-manager 2025-02-10 09:57:00.279555 | orchestrator | 2025-02-10 09:57:00.279596 | orchestrator | + [[ 1 -eq -1 ]] 2025-02-10 09:57:00.279611 | orchestrator | + echo 2025-02-10 09:57:00.279626 | orchestrator | + echo '## Containers @ testbed-manager' 2025-02-10 09:57:00.279643 | orchestrator | + echo 2025-02-10 09:57:00.279657 | orchestrator | + osism container testbed-manager ps 2025-02-10 09:57:00.279715 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-10 09:57:00.279742 | orchestrator | 28b7b294129b nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_blackbox_exporter 2025-02-10 09:57:00.279791 | orchestrator | 2a4096865785 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager:0.27.0.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_alertmanager 2025-02-10 09:57:00.279845 | orchestrator | 2a322452705a nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-02-10 09:57:00.279865 | orchestrator | a5c517203585 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-02-10 09:57:00.279880 | orchestrator | 165a1d7e160d nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server:2.50.1.20241206 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server 2025-02-10 09:57:00.279895 | orchestrator | 3009eb7b7917 quay.io/osism/cephclient:17.2.7 "/usr/bin/dumb-init …" 20 minutes ago Up 19 minutes cephclient 2025-02-10 09:57:00.279912 | orchestrator | bfc43713829c nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes cron 2025-02-10 
09:57:00.279927 | orchestrator | d223470a28eb nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes kolla_toolbox 2025-02-10 09:57:00.279972 | orchestrator | 7947a84fd857 nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes fluentd 2025-02-10 09:57:00.279988 | orchestrator | a383906e0591 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 37 minutes ago Up 36 minutes (healthy) 80/tcp phpmyadmin 2025-02-10 09:57:00.280002 | orchestrator | fc6e91871ea6 quay.io/osism/openstackclient:7.2.1 "/usr/bin/dumb-init …" 37 minutes ago Up 37 minutes openstackclient 2025-02-10 09:57:00.280020 | orchestrator | cb03b1eaeea6 quay.io/osism/homer:v24.05.1 "/bin/sh /entrypoint…" 38 minutes ago Up 37 minutes (healthy) 8080/tcp homer 2025-02-10 09:57:00.280036 | orchestrator | 57e2d5b10494 ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 57 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-02-10 09:57:00.280070 | orchestrator | 690bc27e7030 quay.io/osism/nexus:3.75.1 "/opt/sonatype/nexus…" 59 minutes ago Up 59 minutes (healthy) 8081/tcp, 192.168.16.5:8191-8199->8191-8199/tcp nexus 2025-02-10 09:57:00.280086 | orchestrator | 50ca87bfc194 quay.io/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) osism-kubernetes 2025-02-10 09:57:00.280101 | orchestrator | 51bc48fc5038 quay.io/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) kolla-ansible 2025-02-10 09:57:00.280115 | orchestrator | cbd34edb2700 quay.io/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) ceph-ansible 2025-02-10 09:57:00.280129 | orchestrator | ef2307b69d9e quay.io/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) osism-ansible 2025-02-10 09:57:00.280143 | orchestrator | 63f17fc4b906 quay.io/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up About an hour (healthy) 8000/tcp manager-ara-server-1 2025-02-10 09:57:00.280157 | orchestrator | ce9f3f5b4cee quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-watchdog-1 2025-02-10 09:57:00.280171 | orchestrator | 4263d6b5c861 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-flower-1 2025-02-10 09:57:00.280204 | orchestrator | d2fe6c32bca3 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-openstack-1 2025-02-10 09:57:00.280229 | orchestrator | 9c0a8935499e quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-beat-1 2025-02-10 09:57:00.280244 | orchestrator | b163248a4682 quay.io/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" About an hour ago Up About an hour (healthy) manager-inventory_reconciler-1 2025-02-10 09:57:00.280258 | orchestrator | ee20d0dd49c7 redis:7.4.1-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp manager-redis-1 2025-02-10 09:57:00.280280 | orchestrator | 2b30ebc78539 quay.io/osism/osism-netbox:0.20241219.2 "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-netbox-1 2025-02-10 09:57:00.280294 | orchestrator | 7d754502cd94 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) 192.168.16.5:8000->8000/tcp 
manager-api-1 2025-02-10 09:57:00.280308 | orchestrator | e8bde0df21fb quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-conductor-1 2025-02-10 09:57:00.280331 | orchestrator | eb746765d620 quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" About an hour ago Up About an hour (healthy) osismclient 2025-02-10 09:57:00.546224 | orchestrator | 2f47b737ce3d mariadb:11.6.2 "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 3306/tcp manager-mariadb-1 2025-02-10 09:57:00.546412 | orchestrator | a0d19874a80d quay.io/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-listener-1 2025-02-10 09:57:00.546462 | orchestrator | a3515f7e462b quay.io/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" About an hour ago Up About an hour (healthy) netbox-netbox-worker-1 2025-02-10 09:57:00.546492 | orchestrator | 2b8c6bb2969d quay.io/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" About an hour ago Up About an hour (healthy) netbox-netbox-1 2025-02-10 09:57:00.546517 | orchestrator | b3fd962b24fa postgres:16.6-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 5432/tcp netbox-postgres-1 2025-02-10 09:57:00.546547 | orchestrator | 07235fd64dff redis:7.4.2-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp netbox-redis-1 2025-02-10 09:57:00.546608 | orchestrator | 7513fc3ab599 traefik:v3.2.1 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-02-10 09:57:00.546666 | orchestrator | 2025-02-10 09:57:02.501593 | orchestrator | ## Images @ testbed-manager 2025-02-10 09:57:02.501714 | orchestrator | 2025-02-10 09:57:02.501728 | orchestrator | + echo 2025-02-10 09:57:02.501738 | orchestrator | + echo '## Images @ testbed-manager' 2025-02-10 09:57:02.501798 | orchestrator | + echo 2025-02-10 09:57:02.501809 | orchestrator | + osism container testbed-manager images 2025-02-10 09:57:02.501839 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-10 09:57:02.786139 | orchestrator | postgres 16.6-alpine 5c773214aed7 6 days ago 275MB 2025-02-10 09:57:02.786276 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 2 weeks ago 571MB 2025-02-10 09:57:02.786399 | orchestrator | quay.io/osism/osism-ansible 8.1.0 fbdf816df1ca 3 weeks ago 924MB 2025-02-10 09:57:02.786415 | orchestrator | quay.io/osism/kolla-ansible 8.1.0 fa8c7404b311 3 weeks ago 573MB 2025-02-10 09:57:02.786425 | orchestrator | quay.io/osism/osism-kubernetes 8.1.0 a238c518a1e4 3 weeks ago 1.04GB 2025-02-10 09:57:02.786438 | orchestrator | quay.io/osism/inventory-reconciler 8.1.0 277348be619d 3 weeks ago 269MB 2025-02-10 09:57:02.786447 | orchestrator | quay.io/osism/ceph-ansible 8.1.0 18e18155d88d 3 weeks ago 495MB 2025-02-10 09:57:02.786458 | orchestrator | redis 7.4.2-alpine ee33180a8437 4 weeks ago 41.4MB 2025-02-10 09:57:02.786468 | orchestrator | quay.io/osism/openstackclient 7.2.1 e0c9f377c3ff 5 weeks ago 254MB 2025-02-10 09:57:02.786477 | orchestrator | quay.io/osism/osism-netbox 0.20241219.2 a39845bd553f 7 weeks ago 556MB 2025-02-10 09:57:02.786487 | orchestrator | quay.io/osism/osism 0.20241219.2 888c3668e512 7 weeks ago 530MB 2025-02-10 09:57:02.786497 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cron 3.0.20241206 fa0403ca8610 2 months ago 249MB 2025-02-10 09:57:02.786507 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/fluentd 
5.0.5.20241206 e3fd4f4e5a0d 2 months ago 520MB 2025-02-10 09:57:02.786516 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox 18.3.0.20241206 dc41193bf46f 2 months ago 623MB 2025-02-10 09:57:02.786526 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-v2-server 2.50.1.20241206 8b7e674c8b5c 2 months ago 750MB 2025-02-10 09:57:02.786537 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor 0.49.1.20241206 c20fa7440200 2 months ago 343MB 2025-02-10 09:57:02.786552 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter 1.7.0.20241206 330df749af4b 2 months ago 288MB 2025-02-10 09:57:02.786568 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-alertmanager 0.27.0.20241206 499e10bbe122 2 months ago 383MB 2025-02-10 09:57:02.786584 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-blackbox-exporter 0.24.0.20241206 c41b03f07192 2 months ago 290MB 2025-02-10 09:57:02.786598 | orchestrator | quay.io/osism/nexus 3.75.1 9f9123784e53 2 months ago 635MB 2025-02-10 09:57:02.786647 | orchestrator | quay.io/osism/netbox v4.1.7 c3c57209ccd2 2 months ago 760MB 2025-02-10 09:57:02.786664 | orchestrator | mariadb 11.6.2 027c25922bcd 2 months ago 415MB 2025-02-10 09:57:02.786676 | orchestrator | traefik v3.2.1 639ddc3cec97 2 months ago 189MB 2025-02-10 09:57:02.786697 | orchestrator | hashicorp/vault 1.18.2 197c8072f1e8 2 months ago 466MB 2025-02-10 09:57:02.786707 | orchestrator | redis 7.4.1-alpine 87b460005bd3 4 months ago 46.7MB 2025-02-10 09:57:02.786716 | orchestrator | quay.io/osism/ara-server 1.7.2 bb44122eb176 5 months ago 300MB 2025-02-10 09:57:02.786725 | orchestrator | ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 8 months ago 146MB 2025-02-10 09:57:02.786735 | orchestrator | quay.io/osism/homer v24.05.1 7c94b173f5c5 8 months ago 10.3MB 2025-02-10 09:57:02.786745 | orchestrator | quay.io/osism/cephclient 17.2.7 e627a21d61c9 11 months ago 458MB 2025-02-10 09:57:02.786797 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-10 09:57:02.839521 | orchestrator | ++ semver 8.1.0 5.0.0 2025-02-10 09:57:02.839663 | orchestrator | 2025-02-10 09:57:04.917779 | orchestrator | ## Containers @ testbed-node-0 2025-02-10 09:57:04.917888 | orchestrator | 2025-02-10 09:57:04.917897 | orchestrator | + [[ 1 -eq -1 ]] 2025-02-10 09:57:04.917903 | orchestrator | + echo 2025-02-10 09:57:04.917909 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-02-10 09:57:04.917918 | orchestrator | + echo 2025-02-10 09:57:04.917924 | orchestrator | + osism container testbed-node-0 ps 2025-02-10 09:57:04.919561 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-10 09:57:04.919579 | orchestrator | e2f2a2cf4470 nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-02-10 09:57:04.919588 | orchestrator | a1c44ce31923 nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-02-10 09:57:04.919603 | orchestrator | d84d5d63349c nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-02-10 09:57:04.919618 | orchestrator | 35dccc64a7b6 nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206 
"dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-02-10 09:57:04.919624 | orchestrator | cd115bb798cd nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-02-10 09:57:04.919630 | orchestrator | 7b67f3310801 nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_compute_ironic 2025-02-10 09:57:04.919637 | orchestrator | 87f362b2af5f nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-02-10 09:57:04.919643 | orchestrator | f33264e72a3c nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-02-10 09:57:04.919667 | orchestrator | 0b5b62346d89 nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-02-10 09:57:04.919674 | orchestrator | 457fc93e3677 nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) glance_api 2025-02-10 09:57:04.919680 | orchestrator | e423482b565b nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-02-10 09:57:04.919685 | orchestrator | b8c39628836d nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-02-10 09:57:04.919691 | orchestrator | 7da2598ca8bd nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2025-02-10 09:57:04.919700 | orchestrator | 746aa0540dd9 nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-02-10 09:57:04.919706 | orchestrator | 75b187f0794a nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-02-10 09:57:04.919712 | orchestrator | e6110e83d90a nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-02-10 09:57:04.919719 | orchestrator | ace7513ac35d nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-02-10 09:57:04.919731 | orchestrator | e78d6f807a46 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-02-10 09:57:04.919745 | orchestrator | e9d08fd4d686 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-02-10 09:57:04.919763 | orchestrator | bdf017ec879c nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) ironic_neutron_agent 2025-02-10 09:57:04.919772 | orchestrator | a24515b60009 
nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-02-10 09:57:04.919778 | orchestrator | b2555227712b nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-02-10 09:57:04.919784 | orchestrator | 2c6faa7de039 nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) ironic_http 2025-02-10 09:57:04.919790 | orchestrator | d3eeaff7beae nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes ironic_tftp 2025-02-10 09:57:04.919796 | orchestrator | 1708ba491e77 nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-02-10 09:57:04.919802 | orchestrator | dd8f1027425a nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) ironic_inspector 2025-02-10 09:57:04.919812 | orchestrator | 339e73c31381 nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_api 2025-02-10 09:57:04.919817 | orchestrator | cc7c44018aff nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_conductor 2025-02-10 09:57:04.919823 | orchestrator | 31da18d419a4 nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-02-10 09:57:04.919829 | orchestrator | be0ffec19e09 nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker 2025-02-10 09:57:04.919835 | orchestrator | b61422fa2a24 nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2025-02-10 09:57:04.919841 | orchestrator | f09d85a0f1fd nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-02-10 09:57:04.919847 | orchestrator | 5a0ae170529d nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2025-02-10 09:57:04.919853 | orchestrator | 3908d241e76c nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2025-02-10 09:57:04.919859 | orchestrator | 9747cefb5142 nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_backend_bind9 2025-02-10 09:57:04.919865 | orchestrator | 516795bbd108 nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_worker 2025-02-10 09:57:04.919871 | orchestrator | e32bf344ce7a nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_keystone_listener 2025-02-10 09:57:04.919877 | orchestrator | 
b4931939cc9b nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api 2025-02-10 09:57:04.919886 | orchestrator | bc48ff26543c nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/opt/ceph-container…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-0 2025-02-10 09:57:04.919892 | orchestrator | 53327fcddf01 nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone 2025-02-10 09:57:04.919898 | orchestrator | 64fa7c847542 nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet 2025-02-10 09:57:04.919907 | orchestrator | f5eddef0447a nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_ssh 2025-02-10 09:57:04.919913 | orchestrator | 3b62420a13b7 nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) horizon 2025-02-10 09:57:04.919923 | orchestrator | c47960bf7d23 nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206 "dumb-init -- kolla_…" 24 minutes ago Up 24 minutes (healthy) mariadb 2025-02-10 09:57:04.919928 | orchestrator | 0036708012da nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206 "dumb-init --single-…" 25 minutes ago Up 25 minutes mariadb_clustercheck 2025-02-10 09:57:04.919934 | orchestrator | 7ec1e474dc41 nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) opensearch_dashboards 2025-02-10 09:57:04.919947 | orchestrator | 46ad68d595f5 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-0 2025-02-10 09:57:04.919953 | orchestrator | 59e52d489c7d nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) opensearch 2025-02-10 09:57:04.919959 | orchestrator | 46716540b4d5 nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206 "dumb-init --single-…" 27 minutes ago Up 27 minutes keepalived 2025-02-10 09:57:04.919965 | orchestrator | a6787ca52d25 nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) haproxy 2025-02-10 09:57:04.919970 | orchestrator | fc2754966ccc nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_northd 2025-02-10 09:57:04.919976 | orchestrator | 6b52d6eae9e3 nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_sb_db 2025-02-10 09:57:04.919982 | orchestrator | c114db2f75aa nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_nb_db 2025-02-10 09:57:04.919988 | orchestrator | ff592b750b12 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/opt/ceph-container…" 32 minutes ago Up 32 minutes ceph-mon-testbed-node-0 2025-02-10 09:57:04.919994 | orchestrator | 3f07c68bd8f5 nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206 "dumb-init --single-…" 33 minutes ago Up 33 minutes ovn_controller 2025-02-10 09:57:04.920000 | 
orchestrator | 11e82f6b3484 nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) rabbitmq 2025-02-10 09:57:04.920006 | orchestrator | e788635db013 nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) openvswitch_vswitchd 2025-02-10 09:57:04.920011 | orchestrator | 11c8d1e3a119 nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) openvswitch_db 2025-02-10 09:57:04.920017 | orchestrator | fdb301258476 nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis_sentinel 2025-02-10 09:57:04.920027 | orchestrator | 42640e0b7b1c nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis 2025-02-10 09:57:05.202441 | orchestrator | bafde704f4b0 nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) memcached 2025-02-10 09:57:05.202605 | orchestrator | 32ca6dbac35c nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes cron 2025-02-10 09:57:05.202625 | orchestrator | e10c60eb90e4 nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes kolla_toolbox 2025-02-10 09:57:05.202655 | orchestrator | f6732bcb244f nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes fluentd 2025-02-10 09:57:05.202693 | orchestrator | 2025-02-10 09:57:07.264364 | orchestrator | ## Images @ testbed-node-0 2025-02-10 09:57:07.264500 | orchestrator | 2025-02-10 09:57:07.264519 | orchestrator | + echo 2025-02-10 09:57:07.264533 | orchestrator | + echo '## Images @ testbed-node-0' 2025-02-10 09:57:07.264549 | orchestrator | + echo 2025-02-10 09:57:07.264563 | orchestrator | + osism container testbed-node-0 images 2025-02-10 09:57:07.264597 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-10 09:57:07.264614 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/opensearch 2.18.0.20241206 6c4e6fb389ea 2 months ago 1.46GB 2025-02-10 09:57:07.264629 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards 2.18.0.20241206 27de28d5430e 2 months ago 1.42GB 2025-02-10 09:57:07.264643 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cron 3.0.20241206 fa0403ca8610 2 months ago 249MB 2025-02-10 09:57:07.264657 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/fluentd 5.0.5.20241206 e3fd4f4e5a0d 2 months ago 520MB 2025-02-10 09:57:07.264670 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/haproxy 2.4.24.20241206 49e9d26d5deb 2 months ago 256MB 2025-02-10 09:57:07.264690 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/grafana 11.4.0.20241206 c8680ff56657 2 months ago 760MB 2025-02-10 09:57:07.264713 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keepalived 2.2.4.20241206 75fa76648b09 2 months ago 260MB 2025-02-10 09:57:07.264737 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/memcached 1.6.14.20241206 cefdd26e7841 2 months ago 250MB 2025-02-10 09:57:07.264790 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox 18.3.0.20241206 dc41193bf46f 2 months 
ago 623MB 2025-02-10 09:57:07.264813 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server 10.11.10.20241206 bd3deb912c99 2 months ago 435MB 2025-02-10 09:57:07.264834 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq 3.13.7.20241206 21f621c37859 2 months ago 306MB 2025-02-10 09:57:07.264857 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck 10.11.10.20241206 e9d3c4739314 2 months ago 282MB 2025-02-10 09:57:07.264879 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter 1.7.0.20241206 4143dd2fa615 2 months ago 274MB 2025-02-10 09:57:07.264903 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor 0.49.1.20241206 c20fa7440200 2 months ago 343MB 2025-02-10 09:57:07.264928 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter 1.7.0.20241206 330df749af4b 2 months ago 288MB 2025-02-10 09:57:07.264953 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter 0.15.1.20241206 e0c60a3989a2 2 months ago 280MB 2025-02-10 09:57:07.264992 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter 0.14.2.20241206 7773d2b6cd45 2 months ago 278MB 2025-02-10 09:57:07.265020 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector 12.1.1.20241206 d7c1cac167d7 2 months ago 921MB 2025-02-10 09:57:07.265068 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/horizon 24.0.1.20241206 5c1f402d9b70 2 months ago 1.05GB 2025-02-10 09:57:07.265093 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd 3.3.0.20241206 d1215d779ea3 2 months ago 265MB 2025-02-10 09:57:07.265117 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel 6.0.16.20241206 3af600211be1 2 months ago 254MB 2025-02-10 09:57:07.265142 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/redis 6.0.16.20241206 4318b679d5fe 2 months ago 254MB 2025-02-10 09:57:07.265160 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-api 18.0.1.20241206 82e31f05da66 2 months ago 897MB 2025-02-10 09:57:07.265176 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server 3.3.0.20241206 48764adcaea2 2 months ago 265MB 2025-02-10 09:57:07.265192 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker 18.0.1.20241206 238e19c784a2 2 months ago 898MB 2025-02-10 09:57:07.265208 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener 18.0.1.20241206 f407f4b04c09 2 months ago 898MB 2025-02-10 09:57:07.265224 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/placement-api 11.0.0.20241206 d96ed8b2c79e 2 months ago 883MB 2025-02-10 09:57:07.265241 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/neutron-server 24.0.2.20241206 43980b38b0ed 2 months ago 1.05GB 2025-02-10 09:57:07.265256 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent 24.0.2.20241206 84c326094fe2 2 months ago 1.04GB 2025-02-10 09:57:07.265286 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/glance-api 28.1.1.20241206 4b14125cb067 2 months ago 984MB 2025-02-10 09:57:07.265300 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ceilometer-central 22.0.0.20241206 46ad04dcb9cc 2 months ago 884MB 2025-02-10 09:57:07.265314 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ceilometer-notification 22.0.0.20241206 1e5474bb84d0 2 months ago 884MB 
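The per-node "## Containers @ …" / "## Images @ …" sections in this log appear to be produced by a small loop over the testbed hosts; a minimal sketch follows, assuming only what the trace itself shows (the node list, the section headers, and the `osism container <node> ps` / `osism container <node> images` calls) — the loop body and error handling are otherwise illustrative, and the `semver`/version gate seen in the trace is omitted:

    #!/usr/bin/env bash
    # Sketch: dump the running containers and pulled images for each testbed
    # node, mirroring the "## Containers @ ..." / "## Images @ ..." sections
    # above. Node names and the `osism container` subcommands are taken from
    # the trace in this log; everything else is an assumption.
    set -e
    for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
        echo
        echo "## Containers @ ${node}"
        echo
        osism container "${node}" ps

        echo
        echo "## Images @ ${node}"
        echo
        osism container "${node}" images
    done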
2025-02-10 09:57:07.265328 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-api 24.1.4.20241206 6feffb4e5928 2 months ago 962MB 2025-02-10 09:57:07.265342 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe 24.1.4.20241206 e4fbcf866faa 2 months ago 1.02GB 2025-02-10 09:57:07.265356 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor 24.1.4.20241206 b1e879303728 2 months ago 1.21GB 2025-02-10 09:57:07.265369 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy 29.2.1.20241206 b28acfc2c482 2 months ago 1.2GB 2025-02-10 09:57:07.265383 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-api 29.2.1.20241206 0aba257846a7 2 months ago 1.1GB 2025-02-10 09:57:07.265397 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler 29.2.1.20241206 5f254a0b9978 2 months ago 1.1GB 2025-02-10 09:57:07.265415 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor 29.2.1.20241206 6ff8c71e89fb 2 months ago 1.1GB 2025-02-10 09:57:07.265429 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic 29.2.1.20241206 c99c6cec1f66 2 months ago 1.11GB 2025-02-10 09:57:07.265444 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/skyline-console 4.0.2.20241206 9c70bb46f9f8 2 months ago 964MB 2025-02-10 09:57:07.265457 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/skyline-apiserver 4.0.2.20241206 a550ac917753 2 months ago 943MB 2025-02-10 09:57:07.265480 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/magnum-api 18.0.1.20241206 f2f3b84fdc7f 2 months ago 1.01GB 2025-02-10 09:57:07.265495 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor 18.0.1.20241206 84101c7a637d 2 months ago 1.01GB 2025-02-10 09:57:07.265509 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager 14.0.1.20241206 7b79d4aa02e2 2 months ago 929MB 2025-02-10 09:57:07.265523 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping 14.0.1.20241206 e800cf9a8411 2 months ago 929MB 2025-02-10 09:57:07.265537 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker 14.0.1.20241206 bb00a0af1e9f 2 months ago 929MB 2025-02-10 09:57:07.265551 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent 14.0.1.20241206 d09db2d035d8 2 months ago 949MB 2025-02-10 09:57:07.265565 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-api 14.0.1.20241206 f607550a23ed 2 months ago 949MB 2025-02-10 09:57:07.265580 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/aodh-notifier 18.0.1.20241206 b68406f6e4e5 2 months ago 881MB 2025-02-10 09:57:07.265594 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/aodh-listener 18.0.1.20241206 9c7611e5399b 2 months ago 881MB 2025-02-10 09:57:07.265607 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/aodh-evaluator 18.0.1.20241206 647f01e57ef9 2 months ago 881MB 2025-02-10 09:57:07.265621 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/aodh-api 18.0.1.20241206 7b92476449f7 2 months ago 881MB 2025-02-10 09:57:07.265635 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-central 18.0.1.20241206 2b70377db635 2 months ago 890MB 2025-02-10 09:57:07.265649 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns 18.0.1.20241206 22f27dd7d227 2 months ago 891MB 2025-02-10 09:57:07.265663 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9 18.0.1.20241206 3532ad37b480 2 months ago 895MB 2025-02-10 09:57:07.265677 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-api 18.0.1.20241206 e2e42737d129 2 months ago 891MB 2025-02-10 09:57:07.265690 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-producer 18.0.1.20241206 56c812de20ff 2 months ago 891MB 2025-02-10 09:57:07.265704 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-worker 18.0.1.20241206 5635177aaefd 2 months ago 895MB 2025-02-10 09:57:07.265724 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/heat-engine 22.0.2.20241206 ecf34656f492 2 months ago 963MB 2025-02-10 09:57:07.533315 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/heat-api 22.0.2.20241206 a4992602d31e 2 months ago 962MB 2025-02-10 09:57:07.533445 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/heat-api-cfn 22.0.2.20241206 d7252d037761 2 months ago 962MB 2025-02-10 09:57:07.533477 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cinder-api 24.2.1.20241206 8352396eef36 2 months ago 1.28GB 2025-02-10 09:57:07.533493 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler 24.2.1.20241206 ba995bca2c49 2 months ago 1.28GB 2025-02-10 09:57:07.533507 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh 25.0.1.20241206 ed6287440b5f 2 months ago 936MB 2025-02-10 09:57:07.533522 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet 25.0.1.20241206 cd74d4d03077 2 months ago 933MB 2025-02-10 09:57:07.533536 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone 25.0.1.20241206 ee19c8288c92 2 months ago 957MB 2025-02-10 09:57:07.533578 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server 24.3.4.20241206 26064100e1ea 2 months ago 776MB 2025-02-10 09:57:07.533596 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd 24.3.4.20241206 bed78408ad68 2 months ago 777MB 2025-02-10 09:57:07.533610 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller 24.3.4.20241206 993d26b8baff 2 months ago 777MB 2025-02-10 09:57:07.533624 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server 24.3.4.20241206 c662c8967402 2 months ago 776MB 2025-02-10 09:57:07.533639 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon 17.2.7 d8a5de0d58c4 10 months ago 1.38GB 2025-02-10 09:57:07.533671 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-10 09:57:07.590209 | orchestrator | ++ semver 8.1.0 5.0.0 2025-02-10 09:57:07.590349 | orchestrator | 2025-02-10 09:57:09.700116 | orchestrator | ## Containers @ testbed-node-1 2025-02-10 09:57:09.700261 | orchestrator | 2025-02-10 09:57:09.700283 | orchestrator | + [[ 1 -eq -1 ]] 2025-02-10 09:57:09.700299 | orchestrator | + echo 2025-02-10 09:57:09.700315 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-02-10 09:57:09.700331 | orchestrator | + echo 2025-02-10 09:57:09.700346 | orchestrator | + osism container testbed-node-1 ps 2025-02-10 09:57:09.700381 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-10 09:57:09.700399 | orchestrator | 8034997683a4 nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-02-10 09:57:09.700415 | orchestrator | f15dae491cf0 
nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-02-10 09:57:09.700432 | orchestrator | 78d6c9931e8f nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-02-10 09:57:09.700449 | orchestrator | eb4e7ff94855 nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-02-10 09:57:09.700464 | orchestrator | c7feb311db47 nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-02-10 09:57:09.700478 | orchestrator | 0d85b4c21ec4 nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_compute_ironic 2025-02-10 09:57:09.700493 | orchestrator | 64f5f726c592 nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-02-10 09:57:09.700508 | orchestrator | a77918fdc776 nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-02-10 09:57:09.700522 | orchestrator | 46d2830757ab nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-02-10 09:57:09.700537 | orchestrator | 9c932620eeed nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) glance_api 2025-02-10 09:57:09.700552 | orchestrator | 551f83db2bf9 nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-02-10 09:57:09.700600 | orchestrator | 58c17094a8eb nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-02-10 09:57:09.700616 | orchestrator | 22c4417c23dd nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2025-02-10 09:57:09.700632 | orchestrator | 9f641a586eec nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-02-10 09:57:09.700647 | orchestrator | d6162bac709d nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-02-10 09:57:09.700663 | orchestrator | 02edcfbf98c4 nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-02-10 09:57:09.700678 | orchestrator | 98b6ad785902 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-02-10 09:57:09.700693 | orchestrator | 6995817c5e38 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-02-10 09:57:09.700720 | orchestrator | 
e6ababca141b nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-02-10 09:57:09.700736 | orchestrator | cab7cee22848 nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) ironic_neutron_agent 2025-02-10 09:57:09.700776 | orchestrator | a18871f4ee2f nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-02-10 09:57:09.700792 | orchestrator | b71d7d922171 nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-02-10 09:57:09.700808 | orchestrator | cfc747fe6288 nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-02-10 09:57:09.700823 | orchestrator | 832447f5584f nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) ironic_http 2025-02-10 09:57:09.700838 | orchestrator | 130d946d3257 nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes ironic_tftp 2025-02-10 09:57:09.700853 | orchestrator | 21b0502e907b nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) ironic_inspector 2025-02-10 09:57:09.700881 | orchestrator | 8b0efe56e936 nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_api 2025-02-10 09:57:09.700896 | orchestrator | 2bbd715cf820 nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_conductor 2025-02-10 09:57:09.700920 | orchestrator | 0219214a4894 nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-02-10 09:57:09.700935 | orchestrator | 57a067172401 nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker 2025-02-10 09:57:09.700951 | orchestrator | 3734cb425ee5 nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2025-02-10 09:57:09.700966 | orchestrator | 5e555487c13d nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-02-10 09:57:09.700981 | orchestrator | 0d8c3084ffa4 nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2025-02-10 09:57:09.700996 | orchestrator | acd5b9ae5c70 nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2025-02-10 09:57:09.701012 | orchestrator | 92201df971eb nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_backend_bind9 2025-02-10 09:57:09.701027 | 
orchestrator | c96c8a91df2c nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_worker 2025-02-10 09:57:09.701046 | orchestrator | f13cd08a5a5b nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_keystone_listener 2025-02-10 09:57:09.701060 | orchestrator | 57224281e26d nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api 2025-02-10 09:57:09.701083 | orchestrator | c7db1bed8461 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/opt/ceph-container…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-1 2025-02-10 09:57:09.702224 | orchestrator | db56c5b1db2e nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone 2025-02-10 09:57:09.702264 | orchestrator | 86c7e7a55200 nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet 2025-02-10 09:57:09.702278 | orchestrator | 436ac0932d9a nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_ssh 2025-02-10 09:57:09.702293 | orchestrator | 9b84aec38a6c nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) horizon 2025-02-10 09:57:09.702307 | orchestrator | 76995e9e8327 nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206 "dumb-init -- kolla_…" 25 minutes ago Up 25 minutes (healthy) mariadb 2025-02-10 09:57:09.702321 | orchestrator | c7643fb92fbd nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206 "dumb-init --single-…" 25 minutes ago Up 25 minutes mariadb_clustercheck 2025-02-10 09:57:09.702336 | orchestrator | b728af7d41e0 nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch_dashboards 2025-02-10 09:57:09.702363 | orchestrator | 2717b3f1c9b8 nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) opensearch 2025-02-10 09:57:09.702378 | orchestrator | b56f3855c4f8 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-1 2025-02-10 09:57:09.702391 | orchestrator | 07f82dbf6c99 nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206 "dumb-init --single-…" 27 minutes ago Up 27 minutes keepalived 2025-02-10 09:57:09.702411 | orchestrator | 30eb351f462a nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) haproxy 2025-02-10 09:57:09.702425 | orchestrator | 73df5e16ccb7 nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206 "dumb-init --single-…" 32 minutes ago Up 31 minutes ovn_northd 2025-02-10 09:57:09.702439 | orchestrator | 1ae8e207d9a2 nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_sb_db 2025-02-10 09:57:09.702453 | orchestrator | 6d8ee807a238 nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206 "dumb-init 
--single-…" 32 minutes ago Up 32 minutes ovn_nb_db 2025-02-10 09:57:09.702467 | orchestrator | 547f5fa1e985 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/opt/ceph-container…" 32 minutes ago Up 32 minutes ceph-mon-testbed-node-1 2025-02-10 09:57:09.702481 | orchestrator | da59c0203024 nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206 "dumb-init --single-…" 33 minutes ago Up 33 minutes ovn_controller 2025-02-10 09:57:09.702494 | orchestrator | 2f855a60fa64 nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) rabbitmq 2025-02-10 09:57:09.702516 | orchestrator | 821d91732880 nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) openvswitch_vswitchd 2025-02-10 09:57:09.991911 | orchestrator | 795fe5b8df7b nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) openvswitch_db 2025-02-10 09:57:09.992024 | orchestrator | d0c0d613572b nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis_sentinel 2025-02-10 09:57:09.992042 | orchestrator | 6d8aed744d37 nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis 2025-02-10 09:57:09.992057 | orchestrator | a01a6a6e1be4 nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) memcached 2025-02-10 09:57:09.992071 | orchestrator | a9cb870e1734 nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes cron 2025-02-10 09:57:09.992086 | orchestrator | f0fa1c7b2d5f nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes kolla_toolbox 2025-02-10 09:57:09.992100 | orchestrator | f13eb5ff15df nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes fluentd 2025-02-10 09:57:09.992157 | orchestrator | 2025-02-10 09:57:11.966289 | orchestrator | ## Images @ testbed-node-1 2025-02-10 09:57:11.966430 | orchestrator | 2025-02-10 09:57:11.966452 | orchestrator | + echo 2025-02-10 09:57:11.966468 | orchestrator | + echo '## Images @ testbed-node-1' 2025-02-10 09:57:11.966483 | orchestrator | + echo 2025-02-10 09:57:11.966497 | orchestrator | + osism container testbed-node-1 images 2025-02-10 09:57:11.966530 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-10 09:57:11.966547 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/opensearch 2.18.0.20241206 6c4e6fb389ea 2 months ago 1.46GB 2025-02-10 09:57:11.966561 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards 2.18.0.20241206 27de28d5430e 2 months ago 1.42GB 2025-02-10 09:57:11.966576 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cron 3.0.20241206 fa0403ca8610 2 months ago 249MB 2025-02-10 09:57:11.966590 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/fluentd 5.0.5.20241206 e3fd4f4e5a0d 2 months ago 520MB 2025-02-10 09:57:11.966604 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/haproxy 2.4.24.20241206 49e9d26d5deb 2 months ago 256MB 2025-02-10 09:57:11.966618 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/grafana 
11.4.0.20241206 c8680ff56657 2 months ago 760MB 2025-02-10 09:57:11.966633 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keepalived 2.2.4.20241206 75fa76648b09 2 months ago 260MB 2025-02-10 09:57:11.966647 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/memcached 1.6.14.20241206 cefdd26e7841 2 months ago 250MB 2025-02-10 09:57:11.966661 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox 18.3.0.20241206 dc41193bf46f 2 months ago 623MB 2025-02-10 09:57:11.966674 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server 10.11.10.20241206 bd3deb912c99 2 months ago 435MB 2025-02-10 09:57:11.966688 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq 3.13.7.20241206 21f621c37859 2 months ago 306MB 2025-02-10 09:57:11.966702 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck 10.11.10.20241206 e9d3c4739314 2 months ago 282MB 2025-02-10 09:57:11.966727 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter 1.7.0.20241206 4143dd2fa615 2 months ago 274MB 2025-02-10 09:57:11.966768 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter 1.7.0.20241206 330df749af4b 2 months ago 288MB 2025-02-10 09:57:11.966799 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor 0.49.1.20241206 c20fa7440200 2 months ago 343MB 2025-02-10 09:57:11.966824 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter 0.15.1.20241206 e0c60a3989a2 2 months ago 280MB 2025-02-10 09:57:11.966850 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter 0.14.2.20241206 7773d2b6cd45 2 months ago 278MB 2025-02-10 09:57:11.966870 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector 12.1.1.20241206 d7c1cac167d7 2 months ago 921MB 2025-02-10 09:57:11.966885 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/horizon 24.0.1.20241206 5c1f402d9b70 2 months ago 1.05GB 2025-02-10 09:57:11.966902 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd 3.3.0.20241206 d1215d779ea3 2 months ago 265MB 2025-02-10 09:57:11.966917 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/redis 6.0.16.20241206 4318b679d5fe 2 months ago 254MB 2025-02-10 09:57:11.966961 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel 6.0.16.20241206 3af600211be1 2 months ago 254MB 2025-02-10 09:57:11.966977 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server 3.3.0.20241206 48764adcaea2 2 months ago 265MB 2025-02-10 09:57:11.966992 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-api 18.0.1.20241206 82e31f05da66 2 months ago 897MB 2025-02-10 09:57:11.967008 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker 18.0.1.20241206 238e19c784a2 2 months ago 898MB 2025-02-10 09:57:11.967023 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener 18.0.1.20241206 f407f4b04c09 2 months ago 898MB 2025-02-10 09:57:11.967038 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/placement-api 11.0.0.20241206 d96ed8b2c79e 2 months ago 883MB 2025-02-10 09:57:11.967053 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/neutron-server 24.0.2.20241206 43980b38b0ed 2 months ago 1.05GB 2025-02-10 09:57:11.967069 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent 24.0.2.20241206 84c326094fe2 2 months 
ago 1.04GB 2025-02-10 09:57:11.967097 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/glance-api 28.1.1.20241206 4b14125cb067 2 months ago 984MB 2025-02-10 09:57:11.967112 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-api 24.1.4.20241206 6feffb4e5928 2 months ago 962MB 2025-02-10 09:57:11.967126 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor 24.1.4.20241206 b1e879303728 2 months ago 1.21GB 2025-02-10 09:57:11.967140 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe 24.1.4.20241206 e4fbcf866faa 2 months ago 1.02GB 2025-02-10 09:57:11.967153 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy 29.2.1.20241206 b28acfc2c482 2 months ago 1.2GB 2025-02-10 09:57:11.967167 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-api 29.2.1.20241206 0aba257846a7 2 months ago 1.1GB 2025-02-10 09:57:11.967186 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler 29.2.1.20241206 5f254a0b9978 2 months ago 1.1GB 2025-02-10 09:57:11.967200 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor 29.2.1.20241206 6ff8c71e89fb 2 months ago 1.1GB 2025-02-10 09:57:11.967214 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic 29.2.1.20241206 c99c6cec1f66 2 months ago 1.11GB 2025-02-10 09:57:11.967227 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/magnum-api 18.0.1.20241206 f2f3b84fdc7f 2 months ago 1.01GB 2025-02-10 09:57:11.967241 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor 18.0.1.20241206 84101c7a637d 2 months ago 1.01GB 2025-02-10 09:57:11.967254 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager 14.0.1.20241206 7b79d4aa02e2 2 months ago 929MB 2025-02-10 09:57:11.967267 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping 14.0.1.20241206 e800cf9a8411 2 months ago 929MB 2025-02-10 09:57:11.967281 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker 14.0.1.20241206 bb00a0af1e9f 2 months ago 929MB 2025-02-10 09:57:11.967295 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent 14.0.1.20241206 d09db2d035d8 2 months ago 949MB 2025-02-10 09:57:11.967309 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-api 14.0.1.20241206 f607550a23ed 2 months ago 949MB 2025-02-10 09:57:11.967323 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-central 18.0.1.20241206 2b70377db635 2 months ago 890MB 2025-02-10 09:57:11.967344 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns 18.0.1.20241206 22f27dd7d227 2 months ago 891MB 2025-02-10 09:57:11.967358 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9 18.0.1.20241206 3532ad37b480 2 months ago 895MB 2025-02-10 09:57:11.967371 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-api 18.0.1.20241206 e2e42737d129 2 months ago 891MB 2025-02-10 09:57:11.967385 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-producer 18.0.1.20241206 56c812de20ff 2 months ago 891MB 2025-02-10 09:57:11.967399 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-worker 18.0.1.20241206 5635177aaefd 2 months ago 895MB 2025-02-10 09:57:11.967412 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cinder-api 24.2.1.20241206 8352396eef36 2 months ago 1.28GB 2025-02-10 09:57:11.967425 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler 24.2.1.20241206 ba995bca2c49 2 months ago 1.28GB 2025-02-10 09:57:11.967439 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh 25.0.1.20241206 ed6287440b5f 2 months ago 936MB 2025-02-10 09:57:11.967452 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet 25.0.1.20241206 cd74d4d03077 2 months ago 933MB 2025-02-10 09:57:11.967466 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone 25.0.1.20241206 ee19c8288c92 2 months ago 957MB 2025-02-10 09:57:11.967479 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd 24.3.4.20241206 bed78408ad68 2 months ago 777MB 2025-02-10 09:57:11.967496 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server 24.3.4.20241206 26064100e1ea 2 months ago 776MB 2025-02-10 09:57:11.967511 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller 24.3.4.20241206 993d26b8baff 2 months ago 777MB 2025-02-10 09:57:11.967530 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server 24.3.4.20241206 c662c8967402 2 months ago 776MB 2025-02-10 09:57:12.520331 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon 17.2.7 d8a5de0d58c4 10 months ago 1.38GB 2025-02-10 09:57:12.520489 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-10 09:57:12.520880 | orchestrator | ++ semver 8.1.0 5.0.0 2025-02-10 09:57:12.595973 | orchestrator | 2025-02-10 09:57:14.695932 | orchestrator | ## Containers @ testbed-node-2 2025-02-10 09:57:14.696096 | orchestrator | 2025-02-10 09:57:14.696129 | orchestrator | + [[ 1 -eq -1 ]] 2025-02-10 09:57:14.696155 | orchestrator | + echo 2025-02-10 09:57:14.696177 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-02-10 09:57:14.696194 | orchestrator | + echo 2025-02-10 09:57:14.696208 | orchestrator | + osism container testbed-node-2 ps 2025-02-10 09:57:14.696245 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-10 09:57:14.696262 | orchestrator | 42b1fe8efdc7 nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker:14.0.1.20241206 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-02-10 09:57:14.696278 | orchestrator | b39cd703badf nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping:14.0.1.20241206 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-02-10 09:57:14.696292 | orchestrator | 50a88d828cf2 nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager:14.0.1.20241206 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-02-10 09:57:14.696309 | orchestrator | 0fc91ee6c1a2 nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent:14.0.1.20241206 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-02-10 09:57:14.696350 | orchestrator | 73c20826d897 nexus.testbed.osism.xyz:8193/kolla/release/octavia-api:14.0.1.20241206 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-02-10 09:57:14.696367 | orchestrator | a3e46661b728 nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic:29.2.1.20241206 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_compute_ironic 2025-02-10 09:57:14.696383 | orchestrator | 6d59b2e9fec8 nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy:29.2.1.20241206 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-02-10 
09:57:14.696413 | orchestrator | 428d0e82a662 nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor:29.2.1.20241206 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-02-10 09:57:14.696429 | orchestrator | f4accdd098e4 nexus.testbed.osism.xyz:8193/kolla/release/grafana:11.4.0.20241206 "dumb-init --single-…" 8 minutes ago Up 7 minutes grafana 2025-02-10 09:57:14.696445 | orchestrator | da5b00ef450b nexus.testbed.osism.xyz:8193/kolla/release/glance-api:28.1.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) glance_api 2025-02-10 09:57:14.696461 | orchestrator | 7ee3dce39ae2 nexus.testbed.osism.xyz:8193/kolla/release/nova-api:29.2.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-02-10 09:57:14.696477 | orchestrator | e0e503328347 nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler:29.2.1.20241206 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-02-10 09:57:14.696492 | orchestrator | 739bc76ec4b4 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2025-02-10 09:57:14.696509 | orchestrator | bb1de4e907e3 nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler:24.2.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-02-10 09:57:14.696524 | orchestrator | fb94dde7fbde nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-02-10 09:57:14.696540 | orchestrator | 1c47a5f1371b nexus.testbed.osism.xyz:8193/kolla/release/cinder-api:24.2.1.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-02-10 09:57:14.696557 | orchestrator | 09af23861e3a nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter:0.14.2.20241206 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-02-10 09:57:14.696573 | orchestrator | 02bfe2487488 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-02-10 09:57:14.696601 | orchestrator | 960f2513a817 nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-02-10 09:57:14.696668 | orchestrator | 6aac07b0a548 nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent:24.0.2.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) ironic_neutron_agent 2025-02-10 09:57:14.696696 | orchestrator | 02b6f69adb1f nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor:18.0.1.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-02-10 09:57:14.696732 | orchestrator | a69e565c16ef nexus.testbed.osism.xyz:8193/kolla/release/magnum-api:18.0.1.20241206 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-02-10 09:57:14.696784 | orchestrator | 38a096864d34 nexus.testbed.osism.xyz:8193/kolla/release/neutron-server:24.0.2.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-02-10 09:57:14.696809 | orchestrator | cb253b1d776a nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) ironic_http 2025-02-10 
09:57:14.696832 | orchestrator | 4821a257ddae nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe:24.1.4.20241206 "dumb-init --single-…" 15 minutes ago Up 15 minutes ironic_tftp 2025-02-10 09:57:14.696856 | orchestrator | ab2c4fed3310 nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector:12.1.1.20241206 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) ironic_inspector 2025-02-10 09:57:14.696881 | orchestrator | c9d33769a8ab nexus.testbed.osism.xyz:8193/kolla/release/ironic-api:24.1.4.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_api 2025-02-10 09:57:14.696904 | orchestrator | a24399626b16 nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor:24.1.4.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_conductor 2025-02-10 09:57:14.696928 | orchestrator | 027250f9ca48 nexus.testbed.osism.xyz:8193/kolla/release/placement-api:11.0.0.20241206 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-02-10 09:57:14.696953 | orchestrator | 15a659820cea nexus.testbed.osism.xyz:8193/kolla/release/designate-worker:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker 2025-02-10 09:57:14.696982 | orchestrator | 98596a6ff7d3 nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2025-02-10 09:57:14.696998 | orchestrator | a0005bd2ec8c nexus.testbed.osism.xyz:8193/kolla/release/designate-producer:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-02-10 09:57:14.697012 | orchestrator | b681757d12ef nexus.testbed.osism.xyz:8193/kolla/release/designate-central:18.0.1.20241206 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2025-02-10 09:57:14.697026 | orchestrator | c98e366ee5e6 nexus.testbed.osism.xyz:8193/kolla/release/designate-api:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) designate_api 2025-02-10 09:57:14.697040 | orchestrator | ed607c136f49 nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_backend_bind9 2025-02-10 09:57:14.697055 | orchestrator | 8ff347b8095f nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_worker 2025-02-10 09:57:14.697069 | orchestrator | 563d0a9a1351 nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_keystone_listener 2025-02-10 09:57:14.697083 | orchestrator | 42aa4f2ac755 nexus.testbed.osism.xyz:8193/kolla/release/barbican-api:18.0.1.20241206 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api 2025-02-10 09:57:14.697106 | orchestrator | 114a5b5c50e0 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/opt/ceph-container…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-2 2025-02-10 09:57:14.698306 | orchestrator | f48d6b7302e5 nexus.testbed.osism.xyz:8193/kolla/release/keystone:25.0.1.20241206 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone 2025-02-10 09:57:14.699317 | orchestrator | c79770334d4e nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet:25.0.1.20241206 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet 2025-02-10 
09:57:14.699392 | orchestrator | be226277c041 nexus.testbed.osism.xyz:8193/kolla/release/horizon:24.0.1.20241206 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) horizon 2025-02-10 09:57:14.699410 | orchestrator | 74b61dcf8375 nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh:25.0.1.20241206 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_ssh 2025-02-10 09:57:14.699425 | orchestrator | a70780effd7d nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server:10.11.10.20241206 "dumb-init -- kolla_…" 25 minutes ago Up 24 minutes (healthy) mariadb 2025-02-10 09:57:14.699440 | orchestrator | f3cac8a63d9b nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards:2.18.0.20241206 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch_dashboards 2025-02-10 09:57:14.699456 | orchestrator | 42c9195ab254 nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck:10.11.10.20241206 "dumb-init --single-…" 26 minutes ago Up 25 minutes mariadb_clustercheck 2025-02-10 09:57:14.699473 | orchestrator | dcbe16f047f2 nexus.testbed.osism.xyz:8193/kolla/release/opensearch:2.18.0.20241206 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) opensearch 2025-02-10 09:57:14.699487 | orchestrator | 79e8cb83c37b nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-2 2025-02-10 09:57:14.699501 | orchestrator | 2eed021cb24e nexus.testbed.osism.xyz:8193/kolla/release/keepalived:2.2.4.20241206 "dumb-init --single-…" 28 minutes ago Up 28 minutes keepalived 2025-02-10 09:57:14.699516 | orchestrator | db1fcda46647 nexus.testbed.osism.xyz:8193/kolla/release/haproxy:2.4.24.20241206 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) haproxy 2025-02-10 09:57:14.699548 | orchestrator | d5aa104136e6 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7 "/opt/ceph-container…" 32 minutes ago Up 32 minutes ceph-mon-testbed-node-2 2025-02-10 09:57:14.930157 | orchestrator | badae27578a8 nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd:24.3.4.20241206 "dumb-init --single-…" 32 minutes ago Up 31 minutes ovn_northd 2025-02-10 09:57:14.930323 | orchestrator | feacba57f819 nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server:24.3.4.20241206 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_sb_db 2025-02-10 09:57:14.930348 | orchestrator | bc1ecc1b5981 nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server:24.3.4.20241206 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_nb_db 2025-02-10 09:57:14.930364 | orchestrator | 016c35da7a29 nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq:3.13.7.20241206 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) rabbitmq 2025-02-10 09:57:14.930379 | orchestrator | 918f1be020c8 nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206 "dumb-init --single-…" 33 minutes ago Up 33 minutes ovn_controller 2025-02-10 09:57:14.930426 | orchestrator | 8324f2a41a0c nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) openvswitch_vswitchd 2025-02-10 09:57:14.930457 | orchestrator | 5f6b9fbc052e nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) openvswitch_db 2025-02-10 09:57:14.930472 | orchestrator | 3a78fcbcf2ff nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel:6.0.16.20241206 "dumb-init --single-…" 35 minutes 
ago Up 35 minutes (healthy) redis_sentinel 2025-02-10 09:57:14.930486 | orchestrator | 00e165145508 nexus.testbed.osism.xyz:8193/kolla/release/redis:6.0.16.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis 2025-02-10 09:57:14.930500 | orchestrator | 7f1edbe199fa nexus.testbed.osism.xyz:8193/kolla/release/memcached:1.6.14.20241206 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) memcached 2025-02-10 09:57:14.930514 | orchestrator | a807154fb04e nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes cron 2025-02-10 09:57:14.930528 | orchestrator | 56827d0e35d5 nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes kolla_toolbox 2025-02-10 09:57:14.930542 | orchestrator | ec757da33fde nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206 "dumb-init --single-…" 36 minutes ago Up 36 minutes fluentd 2025-02-10 09:57:14.930578 | orchestrator | 2025-02-10 09:57:16.990434 | orchestrator | ## Images @ testbed-node-2 2025-02-10 09:57:16.990574 | orchestrator | 2025-02-10 09:57:16.990595 | orchestrator | + echo 2025-02-10 09:57:16.990611 | orchestrator | + echo '## Images @ testbed-node-2' 2025-02-10 09:57:16.990627 | orchestrator | + echo 2025-02-10 09:57:16.990642 | orchestrator | + osism container testbed-node-2 images 2025-02-10 09:57:16.990680 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-10 09:57:16.990697 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/opensearch 2.18.0.20241206 6c4e6fb389ea 2 months ago 1.46GB 2025-02-10 09:57:16.990711 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/opensearch-dashboards 2.18.0.20241206 27de28d5430e 2 months ago 1.42GB 2025-02-10 09:57:16.990725 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cron 3.0.20241206 fa0403ca8610 2 months ago 249MB 2025-02-10 09:57:16.990789 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/fluentd 5.0.5.20241206 e3fd4f4e5a0d 2 months ago 520MB 2025-02-10 09:57:16.990806 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/haproxy 2.4.24.20241206 49e9d26d5deb 2 months ago 256MB 2025-02-10 09:57:16.990820 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keepalived 2.2.4.20241206 75fa76648b09 2 months ago 260MB 2025-02-10 09:57:16.990834 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/grafana 11.4.0.20241206 c8680ff56657 2 months ago 760MB 2025-02-10 09:57:16.990848 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox 18.3.0.20241206 dc41193bf46f 2 months ago 623MB 2025-02-10 09:57:16.990862 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/memcached 1.6.14.20241206 cefdd26e7841 2 months ago 250MB 2025-02-10 09:57:16.990875 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/rabbitmq 3.13.7.20241206 21f621c37859 2 months ago 306MB 2025-02-10 09:57:16.990890 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/mariadb-server 10.11.10.20241206 bd3deb912c99 2 months ago 435MB 2025-02-10 09:57:16.990937 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/mariadb-clustercheck 10.11.10.20241206 e9d3c4739314 2 months ago 282MB 2025-02-10 09:57:16.990953 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-elasticsearch-exporter 1.7.0.20241206 4143dd2fa615 2 months ago 274MB 2025-02-10 09:57:16.990967 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor 0.49.1.20241206 
c20fa7440200 2 months ago 343MB 2025-02-10 09:57:16.990981 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter 1.7.0.20241206 330df749af4b 2 months ago 288MB 2025-02-10 09:57:16.990995 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-mysqld-exporter 0.15.1.20241206 e0c60a3989a2 2 months ago 280MB 2025-02-10 09:57:16.991009 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/prometheus-memcached-exporter 0.14.2.20241206 7773d2b6cd45 2 months ago 278MB 2025-02-10 09:57:16.991023 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-inspector 12.1.1.20241206 d7c1cac167d7 2 months ago 921MB 2025-02-10 09:57:16.991037 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/horizon 24.0.1.20241206 5c1f402d9b70 2 months ago 1.05GB 2025-02-10 09:57:16.991063 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd 3.3.0.20241206 d1215d779ea3 2 months ago 265MB 2025-02-10 09:57:16.991078 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/redis 6.0.16.20241206 4318b679d5fe 2 months ago 254MB 2025-02-10 09:57:16.991091 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/redis-sentinel 6.0.16.20241206 3af600211be1 2 months ago 254MB 2025-02-10 09:57:16.991105 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-api 18.0.1.20241206 82e31f05da66 2 months ago 897MB 2025-02-10 09:57:16.991119 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server 3.3.0.20241206 48764adcaea2 2 months ago 265MB 2025-02-10 09:57:16.991137 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-worker 18.0.1.20241206 238e19c784a2 2 months ago 898MB 2025-02-10 09:57:16.991152 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/barbican-keystone-listener 18.0.1.20241206 f407f4b04c09 2 months ago 898MB 2025-02-10 09:57:16.991166 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/placement-api 11.0.0.20241206 d96ed8b2c79e 2 months ago 883MB 2025-02-10 09:57:16.991181 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/neutron-server 24.0.2.20241206 43980b38b0ed 2 months ago 1.05GB 2025-02-10 09:57:16.991195 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-neutron-agent 24.0.2.20241206 84c326094fe2 2 months ago 1.04GB 2025-02-10 09:57:16.991222 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/glance-api 28.1.1.20241206 4b14125cb067 2 months ago 984MB 2025-02-10 09:57:16.991238 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-api 24.1.4.20241206 6feffb4e5928 2 months ago 962MB 2025-02-10 09:57:16.991251 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-pxe 24.1.4.20241206 e4fbcf866faa 2 months ago 1.02GB 2025-02-10 09:57:16.991265 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ironic-conductor 24.1.4.20241206 b1e879303728 2 months ago 1.21GB 2025-02-10 09:57:16.991279 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-novncproxy 29.2.1.20241206 b28acfc2c482 2 months ago 1.2GB 2025-02-10 09:57:16.991293 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-api 29.2.1.20241206 0aba257846a7 2 months ago 1.1GB 2025-02-10 09:57:16.991306 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-scheduler 29.2.1.20241206 5f254a0b9978 2 months ago 1.1GB 2025-02-10 09:57:16.991327 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-conductor 29.2.1.20241206 6ff8c71e89fb 2 months ago 1.1GB 2025-02-10 09:57:16.991341 | 
orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/nova-compute-ironic 29.2.1.20241206 c99c6cec1f66 2 months ago 1.11GB 2025-02-10 09:57:16.991357 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/magnum-conductor 18.0.1.20241206 84101c7a637d 2 months ago 1.01GB 2025-02-10 09:57:16.991372 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/magnum-api 18.0.1.20241206 f2f3b84fdc7f 2 months ago 1.01GB 2025-02-10 09:57:16.991385 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-health-manager 14.0.1.20241206 7b79d4aa02e2 2 months ago 929MB 2025-02-10 09:57:16.991399 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-housekeeping 14.0.1.20241206 e800cf9a8411 2 months ago 929MB 2025-02-10 09:57:16.991416 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-worker 14.0.1.20241206 bb00a0af1e9f 2 months ago 929MB 2025-02-10 09:57:16.991430 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-driver-agent 14.0.1.20241206 d09db2d035d8 2 months ago 949MB 2025-02-10 09:57:16.991444 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/octavia-api 14.0.1.20241206 f607550a23ed 2 months ago 949MB 2025-02-10 09:57:16.991458 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-central 18.0.1.20241206 2b70377db635 2 months ago 890MB 2025-02-10 09:57:16.991471 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-mdns 18.0.1.20241206 22f27dd7d227 2 months ago 891MB 2025-02-10 09:57:16.991485 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-backend-bind9 18.0.1.20241206 3532ad37b480 2 months ago 895MB 2025-02-10 09:57:16.991498 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-api 18.0.1.20241206 e2e42737d129 2 months ago 891MB 2025-02-10 09:57:16.991512 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-producer 18.0.1.20241206 56c812de20ff 2 months ago 891MB 2025-02-10 09:57:16.991526 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/designate-worker 18.0.1.20241206 5635177aaefd 2 months ago 895MB 2025-02-10 09:57:16.991539 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cinder-api 24.2.1.20241206 8352396eef36 2 months ago 1.28GB 2025-02-10 09:57:16.991553 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone-ssh 25.0.1.20241206 ed6287440b5f 2 months ago 936MB 2025-02-10 09:57:16.991566 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/cinder-scheduler 24.2.1.20241206 ba995bca2c49 2 months ago 1.28GB 2025-02-10 09:57:16.991580 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone-fernet 25.0.1.20241206 cd74d4d03077 2 months ago 933MB 2025-02-10 09:57:16.991593 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/keystone 25.0.1.20241206 ee19c8288c92 2 months ago 957MB 2025-02-10 09:57:16.991607 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-northd 24.3.4.20241206 bed78408ad68 2 months ago 777MB 2025-02-10 09:57:16.991621 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-nb-db-server 24.3.4.20241206 26064100e1ea 2 months ago 776MB 2025-02-10 09:57:16.991635 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller 24.3.4.20241206 993d26b8baff 2 months ago 777MB 2025-02-10 09:57:16.991655 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/release/ovn-sb-db-server 24.3.4.20241206 c662c8967402 2 months ago 776MB 2025-02-10 09:57:17.247799 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon 17.2.7 
d8a5de0d58c4 10 months ago 1.38GB 2025-02-10 09:57:17.247965 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-02-10 09:57:17.254920 | orchestrator | + set -e 2025-02-10 09:57:17.255705 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 09:57:17.255827 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 09:57:17.266494 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 09:57:17.266584 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 09:57:17.266605 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 09:57:17.266619 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 09:57:17.266634 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 09:57:17.266648 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 09:57:17.266662 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 09:57:17.266676 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 09:57:17.266690 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 09:57:17.266704 | orchestrator | ++ export ARA=false 2025-02-10 09:57:17.266718 | orchestrator | ++ ARA=false 2025-02-10 09:57:17.266732 | orchestrator | ++ export TEMPEST=false 2025-02-10 09:57:17.266800 | orchestrator | ++ TEMPEST=false 2025-02-10 09:57:17.266816 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 09:57:17.266830 | orchestrator | ++ IS_ZUUL=true 2025-02-10 09:57:17.266844 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 09:57:17.266859 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 09:57:17.266905 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 09:57:17.266931 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 09:57:17.266954 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 09:57:17.266979 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 09:57:17.267004 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 09:57:17.267032 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 09:57:17.267058 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 09:57:17.267077 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 09:57:17.267092 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-10 09:57:17.267106 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-02-10 09:57:17.267135 | orchestrator | + set -e 2025-02-10 09:57:17.267443 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 09:57:17.267465 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 09:57:17.267480 | orchestrator | ++ INTERACTIVE=false 2025-02-10 09:57:17.267494 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 09:57:17.267508 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 09:57:17.267521 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-02-10 09:57:17.267541 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-02-10 09:57:17.302851 | orchestrator | 2025-02-10 09:57:17.992663 | orchestrator | # Ceph status 2025-02-10 09:57:17.992825 | orchestrator | 2025-02-10 09:57:17.992847 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 09:57:17.992885 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 09:57:17.992900 | orchestrator | + echo 2025-02-10 09:57:17.992915 | orchestrator | + echo '# Ceph status' 2025-02-10 09:57:17.992929 | orchestrator | + echo 2025-02-10 09:57:17.992943 | orchestrator | + ceph -s 2025-02-10 09:57:17.992981 | orchestrator | 
cluster: 2025-02-10 09:57:18.046011 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-02-10 09:57:18.046248 | orchestrator | health: HEALTH_OK 2025-02-10 09:57:18.046286 | orchestrator | 2025-02-10 09:57:18.046316 | orchestrator | services: 2025-02-10 09:57:18.046344 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 32m) 2025-02-10 09:57:18.046382 | orchestrator | mgr: testbed-node-0(active, since 19m), standbys: testbed-node-1, testbed-node-2 2025-02-10 09:57:18.046411 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-02-10 09:57:18.046436 | orchestrator | osd: 6 osds: 6 up (since 28m), 6 in (since 29m) 2025-02-10 09:57:18.046463 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-02-10 09:57:18.046491 | orchestrator | 2025-02-10 09:57:18.046519 | orchestrator | data: 2025-02-10 09:57:18.046546 | orchestrator | volumes: 1/1 healthy 2025-02-10 09:57:18.046574 | orchestrator | pools: 14 pools, 401 pgs 2025-02-10 09:57:18.046601 | orchestrator | objects: 519 objects, 2.2 GiB 2025-02-10 09:57:18.046630 | orchestrator | usage: 8.4 GiB used, 111 GiB / 120 GiB avail 2025-02-10 09:57:18.046657 | orchestrator | pgs: 401 active+clean 2025-02-10 09:57:18.046685 | orchestrator | 2025-02-10 09:57:18.046736 | orchestrator | 2025-02-10 09:57:18.637645 | orchestrator | # Ceph versions 2025-02-10 09:57:18.637789 | orchestrator | 2025-02-10 09:57:18.637798 | orchestrator | + echo 2025-02-10 09:57:18.637803 | orchestrator | + echo '# Ceph versions' 2025-02-10 09:57:18.637808 | orchestrator | + echo 2025-02-10 09:57:18.637813 | orchestrator | + ceph versions 2025-02-10 09:57:18.637829 | orchestrator | { 2025-02-10 09:57:18.670547 | orchestrator | "mon": { 2025-02-10 09:57:18.670651 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-02-10 09:57:18.670667 | orchestrator | }, 2025-02-10 09:57:18.670676 | orchestrator | "mgr": { 2025-02-10 09:57:18.670686 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-02-10 09:57:18.670694 | orchestrator | }, 2025-02-10 09:57:18.670704 | orchestrator | "osd": { 2025-02-10 09:57:18.670713 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 6 2025-02-10 09:57:18.670722 | orchestrator | }, 2025-02-10 09:57:18.670731 | orchestrator | "mds": { 2025-02-10 09:57:18.670764 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-02-10 09:57:18.670773 | orchestrator | }, 2025-02-10 09:57:18.670782 | orchestrator | "rgw": { 2025-02-10 09:57:18.670791 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-02-10 09:57:18.670800 | orchestrator | }, 2025-02-10 09:57:18.670809 | orchestrator | "overall": { 2025-02-10 09:57:18.670819 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 18 2025-02-10 09:57:18.670828 | orchestrator | } 2025-02-10 09:57:18.670837 | orchestrator | } 2025-02-10 09:57:18.670862 | orchestrator | 2025-02-10 09:57:19.170359 | orchestrator | # Ceph OSD tree 2025-02-10 09:57:19.170524 | orchestrator | 2025-02-10 09:57:19.170560 | orchestrator | + echo 2025-02-10 09:57:19.170584 | orchestrator | + echo '# Ceph OSD tree' 2025-02-10 09:57:19.170607 | orchestrator | + echo 2025-02-10 09:57:19.170632 | orchestrator | + ceph osd df tree 2025-02-10 09:57:19.170670 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE 
RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-02-10 09:57:19.198132 | orchestrator | -1 0.11691 - 120 GiB 8.4 GiB 6.7 GiB 0 B 1.8 GiB 111 GiB 7.03 1.00 - root default 2025-02-10 09:57:19.198249 | orchestrator | -3 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 600 MiB 37 GiB 7.03 1.00 - host testbed-node-3 2025-02-10 09:57:19.198267 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.0 GiB 0 B 298 MiB 19 GiB 6.55 0.93 209 up osd.1 2025-02-10 09:57:19.198302 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.2 GiB 0 B 302 MiB 18 GiB 7.51 1.07 181 up osd.3 2025-02-10 09:57:19.198329 | orchestrator | -5 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 600 MiB 37 GiB 7.03 1.00 - host testbed-node-4 2025-02-10 09:57:19.198343 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.2 GiB 0 B 298 MiB 18 GiB 7.49 1.07 194 up osd.0 2025-02-10 09:57:19.198357 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.0 GiB 0 B 302 MiB 19 GiB 6.57 0.93 198 up osd.5 2025-02-10 09:57:19.198371 | orchestrator | -7 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 600 MiB 37 GiB 7.03 1.00 - host testbed-node-5 2025-02-10 09:57:19.198385 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.3 GiB 0 B 298 MiB 18 GiB 8.16 1.16 198 up osd.2 2025-02-10 09:57:19.198398 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 906 MiB 0 B 302 MiB 19 GiB 5.91 0.84 190 up osd.4 2025-02-10 09:57:19.198412 | orchestrator | TOTAL 120 GiB 8.4 GiB 6.7 GiB 0 B 1.8 GiB 111 GiB 7.03 2025-02-10 09:57:19.198426 | orchestrator | MIN/MAX VAR: 0.84/1.16 STDDEV: 0.75 2025-02-10 09:57:19.198461 | orchestrator | 2025-02-10 09:57:19.767882 | orchestrator | # Ceph monitor status 2025-02-10 09:57:19.768026 | orchestrator | 2025-02-10 09:57:19.768055 | orchestrator | + echo 2025-02-10 09:57:19.768803 | orchestrator | + echo '# Ceph monitor status' 2025-02-10 09:57:19.768832 | orchestrator | + echo 2025-02-10 09:57:19.768853 | orchestrator | + ceph mon stat 2025-02-10 09:57:19.768901 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {1}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-02-10 09:57:19.802226 | orchestrator | 2025-02-10 09:57:19.803506 | orchestrator | # Ceph quorum status 2025-02-10 09:57:19.803546 | orchestrator | 2025-02-10 09:57:19.803561 | orchestrator | + echo 2025-02-10 09:57:19.803584 | orchestrator | + echo '# Ceph quorum status' 2025-02-10 09:57:19.803608 | orchestrator | + echo 2025-02-10 09:57:19.803640 | orchestrator | + jq 2025-02-10 09:57:20.425130 | orchestrator | + ceph quorum_status 2025-02-10 09:57:20.425278 | orchestrator | { 2025-02-10 09:57:21.012903 | orchestrator | "election_epoch": 6, 2025-02-10 09:57:21.013049 | orchestrator | "quorum": [ 2025-02-10 09:57:21.013072 | orchestrator | 0, 2025-02-10 09:57:21.013090 | orchestrator | 1, 2025-02-10 09:57:21.013108 | orchestrator | 2 2025-02-10 09:57:21.013126 | orchestrator | ], 2025-02-10 09:57:21.013143 | orchestrator | "quorum_names": [ 2025-02-10 09:57:21.013161 | orchestrator | "testbed-node-0", 2025-02-10 09:57:21.013177 | orchestrator | "testbed-node-1", 2025-02-10 09:57:21.013195 | orchestrator | "testbed-node-2" 2025-02-10 09:57:21.013212 | orchestrator | ], 2025-02-10 09:57:21.013229 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-02-10 09:57:21.013247 | orchestrator | "quorum_age": 1943, 
2025-02-10 09:57:21.013264 | orchestrator | "features": { 2025-02-10 09:57:21.013280 | orchestrator | "quorum_con": "4540138320759226367", 2025-02-10 09:57:21.013297 | orchestrator | "quorum_mon": [ 2025-02-10 09:57:21.013328 | orchestrator | "kraken", 2025-02-10 09:57:21.013345 | orchestrator | "luminous", 2025-02-10 09:57:21.013363 | orchestrator | "mimic", 2025-02-10 09:57:21.013380 | orchestrator | "osdmap-prune", 2025-02-10 09:57:21.013397 | orchestrator | "nautilus", 2025-02-10 09:57:21.013414 | orchestrator | "octopus", 2025-02-10 09:57:21.013432 | orchestrator | "pacific", 2025-02-10 09:57:21.013449 | orchestrator | "elector-pinging", 2025-02-10 09:57:21.013466 | orchestrator | "quincy" 2025-02-10 09:57:21.013484 | orchestrator | ] 2025-02-10 09:57:21.013501 | orchestrator | }, 2025-02-10 09:57:21.013519 | orchestrator | "monmap": { 2025-02-10 09:57:21.013537 | orchestrator | "epoch": 1, 2025-02-10 09:57:21.013554 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-02-10 09:57:21.013572 | orchestrator | "modified": "2025-02-10T09:24:11.338395Z", 2025-02-10 09:57:21.013590 | orchestrator | "created": "2025-02-10T09:24:11.338395Z", 2025-02-10 09:57:21.013608 | orchestrator | "min_mon_release": 17, 2025-02-10 09:57:21.013626 | orchestrator | "min_mon_release_name": "quincy", 2025-02-10 09:57:21.013643 | orchestrator | "election_strategy": 1, 2025-02-10 09:57:21.013661 | orchestrator | "disallowed_leaders: ": "", 2025-02-10 09:57:21.013678 | orchestrator | "stretch_mode": false, 2025-02-10 09:57:21.013694 | orchestrator | "tiebreaker_mon": "", 2025-02-10 09:57:21.013712 | orchestrator | "removed_ranks: ": "1", 2025-02-10 09:57:21.013729 | orchestrator | "features": { 2025-02-10 09:57:21.013803 | orchestrator | "persistent": [ 2025-02-10 09:57:21.013821 | orchestrator | "kraken", 2025-02-10 09:57:21.013836 | orchestrator | "luminous", 2025-02-10 09:57:21.013852 | orchestrator | "mimic", 2025-02-10 09:57:21.013868 | orchestrator | "osdmap-prune", 2025-02-10 09:57:21.013884 | orchestrator | "nautilus", 2025-02-10 09:57:21.013901 | orchestrator | "octopus", 2025-02-10 09:57:21.013918 | orchestrator | "pacific", 2025-02-10 09:57:21.013934 | orchestrator | "elector-pinging", 2025-02-10 09:57:21.013951 | orchestrator | "quincy" 2025-02-10 09:57:21.013968 | orchestrator | ], 2025-02-10 09:57:21.013984 | orchestrator | "optional": [] 2025-02-10 09:57:21.014001 | orchestrator | }, 2025-02-10 09:57:21.014064 | orchestrator | "mons": [ 2025-02-10 09:57:21.014088 | orchestrator | { 2025-02-10 09:57:21.014107 | orchestrator | "rank": 0, 2025-02-10 09:57:21.014126 | orchestrator | "name": "testbed-node-0", 2025-02-10 09:57:21.014144 | orchestrator | "public_addrs": { 2025-02-10 09:57:21.014163 | orchestrator | "addrvec": [ 2025-02-10 09:57:21.014183 | orchestrator | { 2025-02-10 09:57:21.014196 | orchestrator | "type": "v2", 2025-02-10 09:57:21.014208 | orchestrator | "addr": "192.168.16.10:3300", 2025-02-10 09:57:21.014225 | orchestrator | "nonce": 0 2025-02-10 09:57:21.014243 | orchestrator | }, 2025-02-10 09:57:21.014259 | orchestrator | { 2025-02-10 09:57:21.014276 | orchestrator | "type": "v1", 2025-02-10 09:57:21.014293 | orchestrator | "addr": "192.168.16.10:6789", 2025-02-10 09:57:21.014310 | orchestrator | "nonce": 0 2025-02-10 09:57:21.014326 | orchestrator | } 2025-02-10 09:57:21.014372 | orchestrator | ] 2025-02-10 09:57:21.014389 | orchestrator | }, 2025-02-10 09:57:21.014405 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-02-10 09:57:21.014422 | orchestrator | 
"public_addr": "192.168.16.10:6789/0", 2025-02-10 09:57:21.014440 | orchestrator | "priority": 0, 2025-02-10 09:57:21.014457 | orchestrator | "weight": 0, 2025-02-10 09:57:21.014474 | orchestrator | "crush_location": "{}" 2025-02-10 09:57:21.014491 | orchestrator | }, 2025-02-10 09:57:21.014508 | orchestrator | { 2025-02-10 09:57:21.014525 | orchestrator | "rank": 1, 2025-02-10 09:57:21.014541 | orchestrator | "name": "testbed-node-1", 2025-02-10 09:57:21.014558 | orchestrator | "public_addrs": { 2025-02-10 09:57:21.014580 | orchestrator | "addrvec": [ 2025-02-10 09:57:21.014598 | orchestrator | { 2025-02-10 09:57:21.014615 | orchestrator | "type": "v2", 2025-02-10 09:57:21.014632 | orchestrator | "addr": "192.168.16.11:3300", 2025-02-10 09:57:21.014644 | orchestrator | "nonce": 0 2025-02-10 09:57:21.014654 | orchestrator | }, 2025-02-10 09:57:21.014667 | orchestrator | { 2025-02-10 09:57:21.014677 | orchestrator | "type": "v1", 2025-02-10 09:57:21.014687 | orchestrator | "addr": "192.168.16.11:6789", 2025-02-10 09:57:21.014697 | orchestrator | "nonce": 0 2025-02-10 09:57:21.014707 | orchestrator | } 2025-02-10 09:57:21.014717 | orchestrator | ] 2025-02-10 09:57:21.014727 | orchestrator | }, 2025-02-10 09:57:21.014759 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-02-10 09:57:21.014778 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-02-10 09:57:21.014795 | orchestrator | "priority": 0, 2025-02-10 09:57:21.014811 | orchestrator | "weight": 0, 2025-02-10 09:57:21.014827 | orchestrator | "crush_location": "{}" 2025-02-10 09:57:21.014843 | orchestrator | }, 2025-02-10 09:57:21.014859 | orchestrator | { 2025-02-10 09:57:21.014877 | orchestrator | "rank": 2, 2025-02-10 09:57:21.014893 | orchestrator | "name": "testbed-node-2", 2025-02-10 09:57:21.014909 | orchestrator | "public_addrs": { 2025-02-10 09:57:21.014924 | orchestrator | "addrvec": [ 2025-02-10 09:57:21.014942 | orchestrator | { 2025-02-10 09:57:21.014959 | orchestrator | "type": "v2", 2025-02-10 09:57:21.014976 | orchestrator | "addr": "192.168.16.12:3300", 2025-02-10 09:57:21.014986 | orchestrator | "nonce": 0 2025-02-10 09:57:21.014996 | orchestrator | }, 2025-02-10 09:57:21.015006 | orchestrator | { 2025-02-10 09:57:21.015016 | orchestrator | "type": "v1", 2025-02-10 09:57:21.015026 | orchestrator | "addr": "192.168.16.12:6789", 2025-02-10 09:57:21.015036 | orchestrator | "nonce": 0 2025-02-10 09:57:21.015046 | orchestrator | } 2025-02-10 09:57:21.015055 | orchestrator | ] 2025-02-10 09:57:21.015065 | orchestrator | }, 2025-02-10 09:57:21.015075 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-02-10 09:57:21.015085 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-02-10 09:57:21.015095 | orchestrator | "priority": 0, 2025-02-10 09:57:21.015105 | orchestrator | "weight": 0, 2025-02-10 09:57:21.015115 | orchestrator | "crush_location": "{}" 2025-02-10 09:57:21.015125 | orchestrator | } 2025-02-10 09:57:21.015135 | orchestrator | ] 2025-02-10 09:57:21.015145 | orchestrator | } 2025-02-10 09:57:21.015155 | orchestrator | } 2025-02-10 09:57:21.015164 | orchestrator | + echo 2025-02-10 09:57:21.015174 | orchestrator | 2025-02-10 09:57:21.015184 | orchestrator | # Ceph free space status 2025-02-10 09:57:21.015194 | orchestrator | 2025-02-10 09:57:21.015204 | orchestrator | + echo '# Ceph free space status' 2025-02-10 09:57:21.015214 | orchestrator | + echo 2025-02-10 09:57:21.015224 | orchestrator | + ceph df 2025-02-10 09:57:21.015254 | orchestrator | --- RAW STORAGE --- 2025-02-10 09:57:21.055306 | 
orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-02-10 09:57:21.055449 | orchestrator | hdd 120 GiB 111 GiB 8.4 GiB 8.4 GiB 7.03 2025-02-10 09:57:21.055479 | orchestrator | TOTAL 120 GiB 111 GiB 8.4 GiB 8.4 GiB 7.03 2025-02-10 09:57:21.055505 | orchestrator | 2025-02-10 09:57:21.055529 | orchestrator | --- POOLS --- 2025-02-10 09:57:21.055555 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-02-10 09:57:21.055581 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-02-10 09:57:21.055598 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-02-10 09:57:21.055613 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-02-10 09:57:21.055657 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-02-10 09:57:21.055676 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-02-10 09:57:21.055700 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-02-10 09:57:21.055721 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-02-10 09:57:21.055818 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-02-10 09:57:21.055844 | orchestrator | .rgw.root 9 32 3.7 KiB 8 64 KiB 0 52 GiB 2025-02-10 09:57:21.055867 | orchestrator | backups 10 32 19 B 1 12 KiB 0 35 GiB 2025-02-10 09:57:21.055889 | orchestrator | volumes 11 32 19 B 1 12 KiB 0 35 GiB 2025-02-10 09:57:21.055914 | orchestrator | images 12 32 2.2 GiB 298 6.7 GiB 6.01 35 GiB 2025-02-10 09:57:21.055955 | orchestrator | metrics 13 32 19 B 1 12 KiB 0 35 GiB 2025-02-10 09:57:21.055978 | orchestrator | vms 14 32 19 B 1 12 KiB 0 35 GiB 2025-02-10 09:57:21.056021 | orchestrator | ++ semver 8.1.0 5.0.0 2025-02-10 09:57:21.108510 | orchestrator | + [[ 1 -eq -1 ]] 2025-02-10 09:57:22.953693 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-02-10 09:57:22.953874 | orchestrator | + osism apply facts 2025-02-10 09:57:22.953912 | orchestrator | 2025-02-10 09:57:22 | INFO  | Task f533c87c-c85b-4ef2-b899-72ce19fc3c01 (facts) was prepared for execution. 2025-02-10 09:57:22.955274 | orchestrator | 2025-02-10 09:57:22 | INFO  | It takes a moment until task f533c87c-c85b-4ef2-b899-72ce19fc3c01 (facts) has been started and output is visible here. 
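The trace above (the ceph status/versions/df commands, the osism apply facts call, and the osism validate runs that follow below) suggests the check stage is a thin shell wrapper around a fixed sequence of ceph and osism commands. A minimal sketch, assuming a condensed hypothetical wrapper rather than the literal contents of /opt/configuration/scripts/check/100-ceph-with-ansible.sh:

    #!/usr/bin/env bash
    # Hedged sketch: condensed reconstruction of the Ceph check sequence seen in this log,
    # not the actual OSISM testbed script. Assumes ceph, jq and the osism CLI are on PATH.
    set -e

    # Dump the cluster state, matching the headings printed in the console output above.
    echo '# Ceph status';            ceph -s
    echo '# Ceph versions';          ceph versions
    echo '# Ceph OSD tree';          ceph osd df tree
    echo '# Ceph monitor status';    ceph mon stat
    echo '# Ceph quorum status';     ceph quorum_status | jq
    echo '# Ceph free space status'; ceph df

    # Refresh Ansible facts, then run the OSISM validators for each Ceph role,
    # mirroring the "osism validate ceph-mons/ceph-mgrs/ceph-osds" runs that follow.
    osism apply facts
    for target in ceph-mons ceph-mgrs ceph-osds; do
        osism validate "$target"
    done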
2025-02-10 09:57:27.022158 | orchestrator | 2025-02-10 09:57:27.022970 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-10 09:57:27.023325 | orchestrator | 2025-02-10 09:57:27.024250 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-10 09:57:27.025172 | orchestrator | Monday 10 February 2025 09:57:27 +0000 (0:00:00.272) 0:00:00.272 ******* 2025-02-10 09:57:27.819921 | orchestrator | ok: [testbed-manager] 2025-02-10 09:57:28.761156 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:28.761693 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:57:28.763009 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:28.764163 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:28.764555 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:28.765460 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:28.765865 | orchestrator | 2025-02-10 09:57:28.768702 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-10 09:57:28.989677 | orchestrator | Monday 10 February 2025 09:57:28 +0000 (0:00:01.740) 0:00:02.013 ******* 2025-02-10 09:57:28.989868 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:57:29.106457 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:29.197701 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:57:29.295174 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:57:29.376020 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:30.214997 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:30.215340 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:30.216297 | orchestrator | 2025-02-10 09:57:30.217511 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:57:30.218339 | orchestrator | 2025-02-10 09:57:30.219041 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:57:30.219938 | orchestrator | Monday 10 February 2025 09:57:30 +0000 (0:00:01.457) 0:00:03.470 ******* 2025-02-10 09:57:35.295200 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:35.295804 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:57:35.295821 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:35.295831 | orchestrator | ok: [testbed-manager] 2025-02-10 09:57:35.296435 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:35.297027 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:35.297389 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:35.298118 | orchestrator | 2025-02-10 09:57:35.298485 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-10 09:57:35.299532 | orchestrator | 2025-02-10 09:57:35.300382 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-10 09:57:35.300794 | orchestrator | Monday 10 February 2025 09:57:35 +0000 (0:00:05.081) 0:00:08.552 ******* 2025-02-10 09:57:35.523804 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:57:35.624782 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:35.731912 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:57:35.817332 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:57:35.896291 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:35.933074 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:35.933177 | orchestrator | skipping: 
[testbed-node-5] 2025-02-10 09:57:35.933435 | orchestrator | 2025-02-10 09:57:35.933905 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:57:35.934134 | orchestrator | 2025-02-10 09:57:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:57:35.934710 | orchestrator | 2025-02-10 09:57:35 | INFO  | Please wait and do not abort execution. 2025-02-10 09:57:35.934782 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:57:35.935451 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:57:35.935667 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:57:35.935979 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:57:35.936206 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:57:35.936485 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:57:35.936810 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:57:35.937136 | orchestrator | 2025-02-10 09:57:35.937306 | orchestrator | Monday 10 February 2025 09:57:35 +0000 (0:00:00.638) 0:00:09.191 ******* 2025-02-10 09:57:35.937701 | orchestrator | =============================================================================== 2025-02-10 09:57:35.937844 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.08s 2025-02-10 09:57:35.937871 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.74s 2025-02-10 09:57:35.938132 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s 2025-02-10 09:57:35.938442 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s 2025-02-10 09:57:36.690869 | orchestrator | + osism validate ceph-mons 2025-02-10 09:57:59.712687 | orchestrator | 2025-02-10 09:57:59.712878 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-02-10 09:57:59.712912 | orchestrator | 2025-02-10 09:57:59.712929 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-02-10 09:57:59.712944 | orchestrator | Monday 10 February 2025 09:57:42 +0000 (0:00:00.440) 0:00:00.440 ******* 2025-02-10 09:57:59.712958 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:57:59.712973 | orchestrator | 2025-02-10 09:57:59.712987 | orchestrator | TASK [Create report output directory] ****************************************** 2025-02-10 09:57:59.713001 | orchestrator | Monday 10 February 2025 09:57:43 +0000 (0:00:00.707) 0:00:01.147 ******* 2025-02-10 09:57:59.713015 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:57:59.713050 | orchestrator | 2025-02-10 09:57:59.713065 | orchestrator | TASK [Define report vars] ****************************************************** 2025-02-10 09:57:59.713079 | orchestrator | Monday 10 February 2025 09:57:44 +0000 (0:00:00.969) 0:00:02.116 ******* 2025-02-10 09:57:59.713093 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:59.713108 | 
orchestrator | 2025-02-10 09:57:59.713122 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-02-10 09:57:59.713136 | orchestrator | Monday 10 February 2025 09:57:44 +0000 (0:00:00.161) 0:00:02.278 ******* 2025-02-10 09:57:59.713150 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:59.713164 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:57:59.713179 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:59.713195 | orchestrator | 2025-02-10 09:57:59.713211 | orchestrator | TASK [Get container info] ****************************************************** 2025-02-10 09:57:59.713226 | orchestrator | Monday 10 February 2025 09:57:44 +0000 (0:00:00.463) 0:00:02.742 ******* 2025-02-10 09:57:59.713242 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:57:59.713257 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:59.713272 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:59.713288 | orchestrator | 2025-02-10 09:57:59.713313 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-02-10 09:57:59.713329 | orchestrator | Monday 10 February 2025 09:57:45 +0000 (0:00:01.183) 0:00:03.926 ******* 2025-02-10 09:57:59.713344 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:59.713360 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:57:59.713376 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:57:59.713391 | orchestrator | 2025-02-10 09:57:59.713407 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-02-10 09:57:59.713423 | orchestrator | Monday 10 February 2025 09:57:46 +0000 (0:00:00.293) 0:00:04.220 ******* 2025-02-10 09:57:59.713438 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:59.713453 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:57:59.713470 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:59.713485 | orchestrator | 2025-02-10 09:57:59.713501 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:57:59.713517 | orchestrator | Monday 10 February 2025 09:57:46 +0000 (0:00:00.539) 0:00:04.759 ******* 2025-02-10 09:57:59.713532 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:59.713546 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:57:59.713560 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:59.713573 | orchestrator | 2025-02-10 09:57:59.713587 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-02-10 09:57:59.713601 | orchestrator | Monday 10 February 2025 09:57:47 +0000 (0:00:00.365) 0:00:05.125 ******* 2025-02-10 09:57:59.713615 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:59.713629 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:57:59.713643 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:57:59.713656 | orchestrator | 2025-02-10 09:57:59.713670 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-02-10 09:57:59.713684 | orchestrator | Monday 10 February 2025 09:57:47 +0000 (0:00:00.329) 0:00:05.455 ******* 2025-02-10 09:57:59.713697 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:59.713733 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:57:59.713747 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:59.713761 | orchestrator | 2025-02-10 09:57:59.713775 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 
09:57:59.713789 | orchestrator | Monday 10 February 2025 09:57:47 +0000 (0:00:00.311) 0:00:05.766 ******* 2025-02-10 09:57:59.713802 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:59.713816 | orchestrator | 2025-02-10 09:57:59.713830 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:57:59.713844 | orchestrator | Monday 10 February 2025 09:57:48 +0000 (0:00:00.765) 0:00:06.532 ******* 2025-02-10 09:57:59.713857 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:59.713871 | orchestrator | 2025-02-10 09:57:59.713892 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:57:59.713931 | orchestrator | Monday 10 February 2025 09:57:48 +0000 (0:00:00.273) 0:00:06.806 ******* 2025-02-10 09:57:59.713945 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:59.713959 | orchestrator | 2025-02-10 09:57:59.713972 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:59.713986 | orchestrator | Monday 10 February 2025 09:57:49 +0000 (0:00:00.292) 0:00:07.099 ******* 2025-02-10 09:57:59.714000 | orchestrator | 2025-02-10 09:57:59.714013 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:59.714112 | orchestrator | Monday 10 February 2025 09:57:49 +0000 (0:00:00.081) 0:00:07.180 ******* 2025-02-10 09:57:59.714130 | orchestrator | 2025-02-10 09:57:59.714144 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:59.714158 | orchestrator | Monday 10 February 2025 09:57:49 +0000 (0:00:00.072) 0:00:07.253 ******* 2025-02-10 09:57:59.714172 | orchestrator | 2025-02-10 09:57:59.714186 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:57:59.714199 | orchestrator | Monday 10 February 2025 09:57:49 +0000 (0:00:00.079) 0:00:07.332 ******* 2025-02-10 09:57:59.714213 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:59.714227 | orchestrator | 2025-02-10 09:57:59.714241 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-02-10 09:57:59.714255 | orchestrator | Monday 10 February 2025 09:57:49 +0000 (0:00:00.257) 0:00:07.590 ******* 2025-02-10 09:57:59.714268 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:57:59.714282 | orchestrator | 2025-02-10 09:57:59.714311 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-02-10 09:58:03.069436 | orchestrator | Monday 10 February 2025 09:57:49 +0000 (0:00:00.294) 0:00:07.885 ******* 2025-02-10 09:58:03.069582 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:03.069605 | orchestrator | 2025-02-10 09:58:03.069629 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-02-10 09:58:03.069644 | orchestrator | Monday 10 February 2025 09:57:50 +0000 (0:00:00.127) 0:00:08.012 ******* 2025-02-10 09:58:03.069658 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:58:03.069672 | orchestrator | 2025-02-10 09:58:03.069686 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-02-10 09:58:03.069700 | orchestrator | Monday 10 February 2025 09:57:51 +0000 (0:00:01.782) 0:00:09.795 ******* 2025-02-10 09:58:03.069791 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:03.069805 | 
orchestrator | 2025-02-10 09:58:03.069839 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-02-10 09:58:03.069854 | orchestrator | Monday 10 February 2025 09:57:52 +0000 (0:00:00.300) 0:00:10.096 ******* 2025-02-10 09:58:03.069868 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:03.069882 | orchestrator | 2025-02-10 09:58:03.069896 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-02-10 09:58:03.069909 | orchestrator | Monday 10 February 2025 09:57:52 +0000 (0:00:00.358) 0:00:10.454 ******* 2025-02-10 09:58:03.069923 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:03.069937 | orchestrator | 2025-02-10 09:58:03.069950 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-02-10 09:58:03.069964 | orchestrator | Monday 10 February 2025 09:57:52 +0000 (0:00:00.220) 0:00:10.675 ******* 2025-02-10 09:58:03.069980 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:03.069996 | orchestrator | 2025-02-10 09:58:03.070011 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-02-10 09:58:03.070085 | orchestrator | Monday 10 February 2025 09:57:53 +0000 (0:00:00.254) 0:00:10.929 ******* 2025-02-10 09:58:03.070102 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:03.070117 | orchestrator | 2025-02-10 09:58:03.070131 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-02-10 09:58:03.070154 | orchestrator | Monday 10 February 2025 09:57:53 +0000 (0:00:00.132) 0:00:11.062 ******* 2025-02-10 09:58:03.070208 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:03.070225 | orchestrator | 2025-02-10 09:58:03.070239 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-02-10 09:58:03.070257 | orchestrator | Monday 10 February 2025 09:57:53 +0000 (0:00:00.128) 0:00:11.190 ******* 2025-02-10 09:58:03.070271 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:03.070285 | orchestrator | 2025-02-10 09:58:03.070300 | orchestrator | TASK [Gather status data] ****************************************************** 2025-02-10 09:58:03.070314 | orchestrator | Monday 10 February 2025 09:57:53 +0000 (0:00:00.121) 0:00:11.312 ******* 2025-02-10 09:58:03.070328 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:58:03.070342 | orchestrator | 2025-02-10 09:58:03.070356 | orchestrator | TASK [Set health test data] **************************************************** 2025-02-10 09:58:03.070369 | orchestrator | Monday 10 February 2025 09:57:54 +0000 (0:00:01.479) 0:00:12.791 ******* 2025-02-10 09:58:03.070387 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:03.070410 | orchestrator | 2025-02-10 09:58:03.070434 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-02-10 09:58:03.070457 | orchestrator | Monday 10 February 2025 09:57:55 +0000 (0:00:00.226) 0:00:13.018 ******* 2025-02-10 09:58:03.070479 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:03.070493 | orchestrator | 2025-02-10 09:58:03.070507 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-02-10 09:58:03.070520 | orchestrator | Monday 10 February 2025 09:57:55 +0000 (0:00:00.139) 0:00:13.158 ******* 2025-02-10 09:58:03.070534 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:03.070548 | orchestrator | 
2025-02-10 09:58:03.070562 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-02-10 09:58:03.070576 | orchestrator | Monday 10 February 2025 09:57:55 +0000 (0:00:00.153) 0:00:13.311 ******* 2025-02-10 09:58:03.070589 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:03.070603 | orchestrator | 2025-02-10 09:58:03.070617 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-02-10 09:58:03.070631 | orchestrator | Monday 10 February 2025 09:57:55 +0000 (0:00:00.146) 0:00:13.457 ******* 2025-02-10 09:58:03.070644 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:03.070658 | orchestrator | 2025-02-10 09:58:03.070672 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-02-10 09:58:03.070686 | orchestrator | Monday 10 February 2025 09:57:55 +0000 (0:00:00.354) 0:00:13.812 ******* 2025-02-10 09:58:03.070700 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:03.070747 | orchestrator | 2025-02-10 09:58:03.070762 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-02-10 09:58:03.070776 | orchestrator | Monday 10 February 2025 09:57:56 +0000 (0:00:00.299) 0:00:14.111 ******* 2025-02-10 09:58:03.070790 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:03.070805 | orchestrator | 2025-02-10 09:58:03.070819 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:58:03.070833 | orchestrator | Monday 10 February 2025 09:57:56 +0000 (0:00:00.268) 0:00:14.380 ******* 2025-02-10 09:58:03.070847 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:03.070860 | orchestrator | 2025-02-10 09:58:03.070874 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:58:03.070888 | orchestrator | Monday 10 February 2025 09:57:58 +0000 (0:00:02.327) 0:00:16.707 ******* 2025-02-10 09:58:03.070902 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:03.070923 | orchestrator | 2025-02-10 09:58:03.070937 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:58:03.070951 | orchestrator | Monday 10 February 2025 09:57:59 +0000 (0:00:00.312) 0:00:17.019 ******* 2025-02-10 09:58:03.070965 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:03.070978 | orchestrator | 2025-02-10 09:58:03.071012 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:03.327415 | orchestrator | Monday 10 February 2025 09:57:59 +0000 (0:00:00.309) 0:00:17.329 ******* 2025-02-10 09:58:03.327527 | orchestrator | 2025-02-10 09:58:03.327543 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:03.327555 | orchestrator | Monday 10 February 2025 09:57:59 +0000 (0:00:00.108) 0:00:17.437 ******* 2025-02-10 09:58:03.327567 | orchestrator | 2025-02-10 09:58:03.327578 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:03.327590 | orchestrator | Monday 10 February 2025 09:57:59 +0000 (0:00:00.092) 0:00:17.529 ******* 2025-02-10 09:58:03.327601 | orchestrator | 2025-02-10 09:58:03.327612 | orchestrator | RUNNING HANDLER [Write report file] 
******************************************** 2025-02-10 09:58:03.327623 | orchestrator | Monday 10 February 2025 09:57:59 +0000 (0:00:00.098) 0:00:17.627 ******* 2025-02-10 09:58:03.327635 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:03.327646 | orchestrator | 2025-02-10 09:58:03.327657 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:58:03.327669 | orchestrator | Monday 10 February 2025 09:58:01 +0000 (0:00:02.044) 0:00:19.672 ******* 2025-02-10 09:58:03.327680 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-02-10 09:58:03.327691 | orchestrator |  "msg": [ 2025-02-10 09:58:03.327751 | orchestrator |  "Validator run completed.", 2025-02-10 09:58:03.327797 | orchestrator |  "You can find the report file here:", 2025-02-10 09:58:03.327811 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-02-10T09:57:43+00:00-report.json", 2025-02-10 09:58:03.327823 | orchestrator |  "on the following host:", 2025-02-10 09:58:03.327836 | orchestrator |  "testbed-manager" 2025-02-10 09:58:03.327847 | orchestrator |  ] 2025-02-10 09:58:03.327859 | orchestrator | } 2025-02-10 09:58:03.327870 | orchestrator | 2025-02-10 09:58:03.327881 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:58:03.327894 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-10 09:58:03.327907 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:58:03.327919 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:58:03.327935 | orchestrator | 2025-02-10 09:58:03.327948 | orchestrator | Monday 10 February 2025 09:58:02 +0000 (0:00:00.872) 0:00:20.545 ******* 2025-02-10 09:58:03.327961 | orchestrator | =============================================================================== 2025-02-10 09:58:03.327974 | orchestrator | Aggregate test results step one ----------------------------------------- 2.33s 2025-02-10 09:58:03.327987 | orchestrator | Write report file ------------------------------------------------------- 2.04s 2025-02-10 09:58:03.327999 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.78s 2025-02-10 09:58:03.328013 | orchestrator | Gather status data ------------------------------------------------------ 1.48s 2025-02-10 09:58:03.328033 | orchestrator | Get container info ------------------------------------------------------ 1.18s 2025-02-10 09:58:03.328047 | orchestrator | Create report output directory ------------------------------------------ 0.97s 2025-02-10 09:58:03.328059 | orchestrator | Print report file information ------------------------------------------- 0.87s 2025-02-10 09:58:03.328072 | orchestrator | Aggregate test results step one ----------------------------------------- 0.77s 2025-02-10 09:58:03.328085 | orchestrator | Get timestamp for report file ------------------------------------------- 0.71s 2025-02-10 09:58:03.328098 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2025-02-10 09:58:03.328111 | orchestrator | Prepare test data for container existance test -------------------------- 0.46s 2025-02-10 09:58:03.328123 | orchestrator | Prepare test data ------------------------------------------------------- 
0.37s 2025-02-10 09:58:03.328156 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.36s 2025-02-10 09:58:03.328167 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.35s 2025-02-10 09:58:03.328178 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.33s 2025-02-10 09:58:03.328189 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s 2025-02-10 09:58:03.328200 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s 2025-02-10 09:58:03.328215 | orchestrator | Aggregate test results step three --------------------------------------- 0.31s 2025-02-10 09:58:03.328226 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s 2025-02-10 09:58:03.328237 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s 2025-02-10 09:58:03.328265 | orchestrator | + osism validate ceph-mgrs 2025-02-10 09:58:25.022145 | orchestrator | 2025-02-10 09:58:25.022241 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-02-10 09:58:25.022250 | orchestrator | 2025-02-10 09:58:25.022256 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-02-10 09:58:25.022261 | orchestrator | Monday 10 February 2025 09:58:09 +0000 (0:00:00.435) 0:00:00.435 ******* 2025-02-10 09:58:25.022267 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:25.022273 | orchestrator | 2025-02-10 09:58:25.022278 | orchestrator | TASK [Create report output directory] ****************************************** 2025-02-10 09:58:25.022283 | orchestrator | Monday 10 February 2025 09:58:09 +0000 (0:00:00.702) 0:00:01.138 ******* 2025-02-10 09:58:25.022288 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:25.022293 | orchestrator | 2025-02-10 09:58:25.022298 | orchestrator | TASK [Define report vars] ****************************************************** 2025-02-10 09:58:25.022303 | orchestrator | Monday 10 February 2025 09:58:10 +0000 (0:00:00.983) 0:00:02.122 ******* 2025-02-10 09:58:25.022308 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.022314 | orchestrator | 2025-02-10 09:58:25.022319 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-02-10 09:58:25.022324 | orchestrator | Monday 10 February 2025 09:58:10 +0000 (0:00:00.119) 0:00:02.241 ******* 2025-02-10 09:58:25.022329 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.022334 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:58:25.022339 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:58:25.022344 | orchestrator | 2025-02-10 09:58:25.022351 | orchestrator | TASK [Get container info] ****************************************************** 2025-02-10 09:58:25.022360 | orchestrator | Monday 10 February 2025 09:58:11 +0000 (0:00:00.495) 0:00:02.736 ******* 2025-02-10 09:58:25.022369 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.022377 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:58:25.022385 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:58:25.022392 | orchestrator | 2025-02-10 09:58:25.022401 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-02-10 09:58:25.022409 | orchestrator | Monday 10 February 
2025 09:58:12 +0000 (0:00:01.169) 0:00:03.906 ******* 2025-02-10 09:58:25.022417 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.022425 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:58:25.022433 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:58:25.022441 | orchestrator | 2025-02-10 09:58:25.022449 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-02-10 09:58:25.022457 | orchestrator | Monday 10 February 2025 09:58:12 +0000 (0:00:00.339) 0:00:04.245 ******* 2025-02-10 09:58:25.022465 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.022473 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:58:25.022481 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:58:25.022489 | orchestrator | 2025-02-10 09:58:25.022497 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:58:25.022506 | orchestrator | Monday 10 February 2025 09:58:13 +0000 (0:00:00.566) 0:00:04.811 ******* 2025-02-10 09:58:25.022539 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.022548 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:58:25.022557 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:58:25.022566 | orchestrator | 2025-02-10 09:58:25.022576 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-02-10 09:58:25.022585 | orchestrator | Monday 10 February 2025 09:58:13 +0000 (0:00:00.357) 0:00:05.169 ******* 2025-02-10 09:58:25.022595 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.022602 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:58:25.022607 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:58:25.022612 | orchestrator | 2025-02-10 09:58:25.022617 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-02-10 09:58:25.022622 | orchestrator | Monday 10 February 2025 09:58:14 +0000 (0:00:00.321) 0:00:05.490 ******* 2025-02-10 09:58:25.022627 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.022632 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:58:25.022637 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:58:25.022642 | orchestrator | 2025-02-10 09:58:25.022647 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:58:25.022652 | orchestrator | Monday 10 February 2025 09:58:14 +0000 (0:00:00.344) 0:00:05.834 ******* 2025-02-10 09:58:25.022657 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.022662 | orchestrator | 2025-02-10 09:58:25.022667 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:58:25.022672 | orchestrator | Monday 10 February 2025 09:58:15 +0000 (0:00:00.766) 0:00:06.601 ******* 2025-02-10 09:58:25.022677 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.022682 | orchestrator | 2025-02-10 09:58:25.022704 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:58:25.022710 | orchestrator | Monday 10 February 2025 09:58:15 +0000 (0:00:00.288) 0:00:06.889 ******* 2025-02-10 09:58:25.022715 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.022720 | orchestrator | 2025-02-10 09:58:25.022725 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:25.022741 | orchestrator | Monday 10 February 2025 09:58:15 +0000 (0:00:00.273) 
0:00:07.163 ******* 2025-02-10 09:58:25.022746 | orchestrator | 2025-02-10 09:58:25.022752 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:25.022757 | orchestrator | Monday 10 February 2025 09:58:15 +0000 (0:00:00.077) 0:00:07.240 ******* 2025-02-10 09:58:25.022762 | orchestrator | 2025-02-10 09:58:25.022767 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:25.022772 | orchestrator | Monday 10 February 2025 09:58:16 +0000 (0:00:00.086) 0:00:07.327 ******* 2025-02-10 09:58:25.022777 | orchestrator | 2025-02-10 09:58:25.022782 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:58:25.022787 | orchestrator | Monday 10 February 2025 09:58:16 +0000 (0:00:00.081) 0:00:07.408 ******* 2025-02-10 09:58:25.022792 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.022797 | orchestrator | 2025-02-10 09:58:25.022802 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-02-10 09:58:25.022809 | orchestrator | Monday 10 February 2025 09:58:16 +0000 (0:00:00.272) 0:00:07.680 ******* 2025-02-10 09:58:25.022816 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.022824 | orchestrator | 2025-02-10 09:58:25.022842 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-02-10 09:58:25.458549 | orchestrator | Monday 10 February 2025 09:58:16 +0000 (0:00:00.253) 0:00:07.934 ******* 2025-02-10 09:58:25.458682 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.458747 | orchestrator | 2025-02-10 09:58:25.458763 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-02-10 09:58:25.458778 | orchestrator | Monday 10 February 2025 09:58:16 +0000 (0:00:00.116) 0:00:08.050 ******* 2025-02-10 09:58:25.458793 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:58:25.458807 | orchestrator | 2025-02-10 09:58:25.458849 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-02-10 09:58:25.458863 | orchestrator | Monday 10 February 2025 09:58:18 +0000 (0:00:01.759) 0:00:09.810 ******* 2025-02-10 09:58:25.458877 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.458891 | orchestrator | 2025-02-10 09:58:25.458904 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-02-10 09:58:25.458918 | orchestrator | Monday 10 February 2025 09:58:18 +0000 (0:00:00.288) 0:00:10.098 ******* 2025-02-10 09:58:25.458932 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.458945 | orchestrator | 2025-02-10 09:58:25.458959 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-02-10 09:58:25.458973 | orchestrator | Monday 10 February 2025 09:58:19 +0000 (0:00:00.489) 0:00:10.587 ******* 2025-02-10 09:58:25.458986 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.459000 | orchestrator | 2025-02-10 09:58:25.459014 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-02-10 09:58:25.459027 | orchestrator | Monday 10 February 2025 09:58:19 +0000 (0:00:00.151) 0:00:10.739 ******* 2025-02-10 09:58:25.459041 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.459054 | orchestrator | 2025-02-10 09:58:25.459068 | orchestrator | TASK [Set validation result to passed if no 
test failed] *********************** 2025-02-10 09:58:25.459083 | orchestrator | Monday 10 February 2025 09:58:19 +0000 (0:00:00.169) 0:00:10.908 ******* 2025-02-10 09:58:25.459098 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:25.459113 | orchestrator | 2025-02-10 09:58:25.459128 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-02-10 09:58:25.459143 | orchestrator | Monday 10 February 2025 09:58:19 +0000 (0:00:00.316) 0:00:11.225 ******* 2025-02-10 09:58:25.459158 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:25.459173 | orchestrator | 2025-02-10 09:58:25.459189 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:58:25.459205 | orchestrator | Monday 10 February 2025 09:58:20 +0000 (0:00:00.294) 0:00:11.519 ******* 2025-02-10 09:58:25.459220 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:25.459235 | orchestrator | 2025-02-10 09:58:25.459250 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:58:25.459266 | orchestrator | Monday 10 February 2025 09:58:21 +0000 (0:00:01.510) 0:00:13.029 ******* 2025-02-10 09:58:25.459281 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:25.459296 | orchestrator | 2025-02-10 09:58:25.459312 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:58:25.459330 | orchestrator | Monday 10 February 2025 09:58:22 +0000 (0:00:00.280) 0:00:13.310 ******* 2025-02-10 09:58:25.459345 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:25.459361 | orchestrator | 2025-02-10 09:58:25.459376 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:25.459405 | orchestrator | Monday 10 February 2025 09:58:22 +0000 (0:00:00.294) 0:00:13.605 ******* 2025-02-10 09:58:25.459421 | orchestrator | 2025-02-10 09:58:25.459437 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:25.459453 | orchestrator | Monday 10 February 2025 09:58:22 +0000 (0:00:00.096) 0:00:13.702 ******* 2025-02-10 09:58:25.459466 | orchestrator | 2025-02-10 09:58:25.459480 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:25.459493 | orchestrator | Monday 10 February 2025 09:58:22 +0000 (0:00:00.079) 0:00:13.782 ******* 2025-02-10 09:58:25.459507 | orchestrator | 2025-02-10 09:58:25.459521 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-02-10 09:58:25.459534 | orchestrator | Monday 10 February 2025 09:58:22 +0000 (0:00:00.075) 0:00:13.857 ******* 2025-02-10 09:58:25.459548 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:25.459561 | orchestrator | 2025-02-10 09:58:25.459575 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:58:25.459598 | orchestrator | Monday 10 February 2025 09:58:24 +0000 (0:00:01.957) 0:00:15.814 ******* 2025-02-10 09:58:25.459612 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-02-10 09:58:25.459626 | orchestrator |  "msg": [ 2025-02-10 09:58:25.459640 | orchestrator |  "Validator run completed.", 2025-02-10 
09:58:25.459654 | orchestrator |  "You can find the report file here:", 2025-02-10 09:58:25.459667 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-02-10T09:58:09+00:00-report.json", 2025-02-10 09:58:25.459682 | orchestrator |  "on the following host:", 2025-02-10 09:58:25.459720 | orchestrator |  "testbed-manager" 2025-02-10 09:58:25.459746 | orchestrator |  ] 2025-02-10 09:58:25.459760 | orchestrator | } 2025-02-10 09:58:25.459774 | orchestrator | 2025-02-10 09:58:25.459788 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:58:25.459803 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 09:58:25.459819 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:58:25.459852 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:58:25.745323 | orchestrator | 2025-02-10 09:58:25.745452 | orchestrator | Monday 10 February 2025 09:58:24 +0000 (0:00:00.435) 0:00:16.250 ******* 2025-02-10 09:58:25.745472 | orchestrator | =============================================================================== 2025-02-10 09:58:25.745487 | orchestrator | Write report file ------------------------------------------------------- 1.96s 2025-02-10 09:58:25.745501 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.76s 2025-02-10 09:58:25.745515 | orchestrator | Aggregate test results step one ----------------------------------------- 1.51s 2025-02-10 09:58:25.745528 | orchestrator | Get container info ------------------------------------------------------ 1.17s 2025-02-10 09:58:25.745542 | orchestrator | Create report output directory ------------------------------------------ 0.98s 2025-02-10 09:58:25.745556 | orchestrator | Aggregate test results step one ----------------------------------------- 0.77s 2025-02-10 09:58:25.745570 | orchestrator | Get timestamp for report file ------------------------------------------- 0.70s 2025-02-10 09:58:25.745584 | orchestrator | Set test result to passed if container is existing ---------------------- 0.57s 2025-02-10 09:58:25.745597 | orchestrator | Prepare test data for container existance test -------------------------- 0.50s 2025-02-10 09:58:25.745611 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.49s 2025-02-10 09:58:25.745625 | orchestrator | Print report file information ------------------------------------------- 0.44s 2025-02-10 09:58:25.745638 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2025-02-10 09:58:25.745652 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.34s 2025-02-10 09:58:25.745666 | orchestrator | Set test result to failed if container is missing ----------------------- 0.34s 2025-02-10 09:58:25.745680 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.32s 2025-02-10 09:58:25.745752 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.32s 2025-02-10 09:58:25.745766 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s 2025-02-10 09:58:25.745780 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.29s 2025-02-10 09:58:25.745794 | orchestrator | 
Parse mgr module list from json ----------------------------------------- 0.29s 2025-02-10 09:58:25.745807 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2025-02-10 09:58:25.745859 | orchestrator | + osism validate ceph-osds 2025-02-10 09:58:37.120810 | orchestrator | 2025-02-10 09:58:37.120969 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-02-10 09:58:37.121016 | orchestrator | 2025-02-10 09:58:37.121032 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-02-10 09:58:37.121046 | orchestrator | Monday 10 February 2025 09:58:31 +0000 (0:00:00.404) 0:00:00.404 ******* 2025-02-10 09:58:37.121061 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:37.121075 | orchestrator | 2025-02-10 09:58:37.121089 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:58:37.121103 | orchestrator | Monday 10 February 2025 09:58:33 +0000 (0:00:01.764) 0:00:02.169 ******* 2025-02-10 09:58:37.121117 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:37.121130 | orchestrator | 2025-02-10 09:58:37.121144 | orchestrator | TASK [Create report output directory] ****************************************** 2025-02-10 09:58:37.121158 | orchestrator | Monday 10 February 2025 09:58:33 +0000 (0:00:00.244) 0:00:02.413 ******* 2025-02-10 09:58:37.121172 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:37.121186 | orchestrator | 2025-02-10 09:58:37.121199 | orchestrator | TASK [Define report vars] ****************************************************** 2025-02-10 09:58:37.121213 | orchestrator | Monday 10 February 2025 09:58:34 +0000 (0:00:00.948) 0:00:03.362 ******* 2025-02-10 09:58:37.121227 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:37.121242 | orchestrator | 2025-02-10 09:58:37.121257 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-02-10 09:58:37.121273 | orchestrator | Monday 10 February 2025 09:58:34 +0000 (0:00:00.291) 0:00:03.654 ******* 2025-02-10 09:58:37.121288 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:37.121304 | orchestrator | 2025-02-10 09:58:37.121319 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-02-10 09:58:37.121334 | orchestrator | Monday 10 February 2025 09:58:34 +0000 (0:00:00.166) 0:00:03.820 ******* 2025-02-10 09:58:37.121349 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:37.121364 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:37.121380 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:37.121395 | orchestrator | 2025-02-10 09:58:37.121408 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-02-10 09:58:37.121422 | orchestrator | Monday 10 February 2025 09:58:35 +0000 (0:00:00.362) 0:00:04.182 ******* 2025-02-10 09:58:37.121435 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:37.121449 | orchestrator | 2025-02-10 09:58:37.121463 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-02-10 09:58:37.121477 | orchestrator | Monday 10 February 2025 09:58:35 +0000 (0:00:00.166) 0:00:04.349 ******* 2025-02-10 09:58:37.121490 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:37.121504 | orchestrator | ok: 
[testbed-node-4] 2025-02-10 09:58:37.121517 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:37.121531 | orchestrator | 2025-02-10 09:58:37.121545 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-02-10 09:58:37.121558 | orchestrator | Monday 10 February 2025 09:58:35 +0000 (0:00:00.405) 0:00:04.755 ******* 2025-02-10 09:58:37.121572 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:37.121586 | orchestrator | 2025-02-10 09:58:37.121599 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:58:37.121613 | orchestrator | Monday 10 February 2025 09:58:36 +0000 (0:00:00.595) 0:00:05.351 ******* 2025-02-10 09:58:37.121627 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:37.121658 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:37.121672 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:37.121720 | orchestrator | 2025-02-10 09:58:37.121735 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-02-10 09:58:37.121749 | orchestrator | Monday 10 February 2025 09:58:36 +0000 (0:00:00.566) 0:00:05.918 ******* 2025-02-10 09:58:37.121765 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'df51186a49bc05964f3398bf117a34ece18d75e0c904ff617a036690927b83a8', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-02-10 09:58:37.121793 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd828e1a692af2d758581b1c16384283c53298bbea780f98da8d2fda0cd505e4f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:58:37.121813 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b8db7ea7d36187abbd7011e6206b565835b0e6203c663af9534434744ae41431', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:58:37.121828 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eba2b5e8b9ae1b8667798410848bd009181faf576b96da2ede89a9f540187a94', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:58:37.121857 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e483db3d95063dfc15778d26f27dfd4c9437e0724fae42f3d94af650996c528f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-02-10 09:58:37.121886 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a7c579334f87fbfb278d69293f2372dd514cb66de9ae558ce5d6950566ddfe1f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-02-10 09:58:37.290429 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b3b9fa51aaec767bafde067514edcb5f3c011fa4f07f6af98117ec4573318400', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:58:37.290558 | orchestrator | skipping: [testbed-node-3] => 
(item={'id': '6ace435906c6425a92d662956f4df611518fe75a17696acca8a90be5f1f7d753', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-02-10 09:58:37.290576 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40a0b9872ea17aacdaace097af6b22e4ee36dbf84fa4d16ea8c492dc32515668', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-02-10 09:58:37.290592 | orchestrator | skipping: [testbed-node-3] => (item={'id': '583d70b144388ed5fb4235eb27682826f7910f6dca3b7b532e02b18c31307899', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-rgw-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-02-10 09:58:37.290606 | orchestrator | skipping: [testbed-node-3] => (item={'id': '840b9147b68c7612564988c92cb9bca2e2b9d52a76b4e5861da354b7797f9dcb', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2025-02-10 09:58:37.290619 | orchestrator | skipping: [testbed-node-3] => (item={'id': '36825f3d5ca13d7dcbac04462183aeace7f78ec05473b08991c03abbf8e2b87f', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 27 minutes'})  2025-02-10 09:58:37.290633 | orchestrator | ok: [testbed-node-3] => (item={'id': '51ea3ec1ce933fa15b42ae5dd1c8495be77a92e0de61bbe7e3f7583b6dd82f72', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:58:37.290646 | orchestrator | ok: [testbed-node-3] => (item={'id': '308d362848e379f122515b54370d582fdfee3fd557a0bbb46e081aa187d18a0b', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:58:37.290756 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ae876f5fc9cde7fe72761e272fb85686b0fa8b4dbd482de63a870c9742ec8a0e', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 33 minutes'})  2025-02-10 09:58:37.290778 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b47d615cfba6a70e3381be3e90775d5855fce5109782c6b309986d6f73e42877', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:58:37.290791 | orchestrator | skipping: [testbed-node-3] => (item={'id': '57486b9fca4132cb563e3164c325e14fc089cb200f6447efde2380cb3f1e0273', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:58:37.290805 | orchestrator | skipping: [testbed-node-3] => (item={'id': '320477803d7ba0ad0ec20800473a8c631b3d32c45815f728e662dfd1c2542f06', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'name': '/cron', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:58:37.290818 | orchestrator | skipping: [testbed-node-3] => (item={'id': '891a06c6a9398bc05be89fde93b4e87fcab27e60cd510f86591e974f8a57b846', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:58:37.290850 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b4c932dd4026d95dfa6666c8bb2397198d7e52a45fddefff1ff31bb4c4cc4438', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'name': '/fluentd', 'state': 'running', 'status': 'Up 37 minutes'})  2025-02-10 09:58:37.290863 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4d182f0f6a39812260a5014bc5af3adb1f41a55f29ed54d9612d680324457d60', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-02-10 09:58:37.290876 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e1a48f413791a871c49232c6e52ad09d40385a14af85e2f12bcc209729a8f520', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:58:37.290888 | orchestrator | skipping: [testbed-node-4] => (item={'id': '568373e5f1387128b756f4e3c3030664a6ca4cabe1d0a0fd8fe25f906e10b44d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:58:37.290901 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0bbd7da5a5ff2fb33c889ef709c65434a7a597806965edfc226e262a50c7bc35', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:58:37.290914 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd7e88b131c43305820df13175f24cf48682562594ff3b213d6db872134f92129', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-02-10 09:58:37.290926 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ccc02604aaa5cb1139877ee7b66185e60e707e834ebfd3043bca9b4f62185c82', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-02-10 09:58:37.290939 | orchestrator | skipping: [testbed-node-4] => (item={'id': '466eae0e5f9faf5ca6c0c6452f4eb891ee74d8ba19b2ada2e7807382eb051a8c', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:58:37.290973 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9425a3ba502e6960406654de742fee77f4985822db4de6f1882b9033f60e494d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-02-10 09:58:37.290996 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dca71a73300eb01c261baea77a7bf7969a5b93492d10fcbeebd112f072bb5460', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-02-10 09:58:37.291011 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'a396bfc9c5a2b11bd23ffbb0649ca46687bc0d8fb5635d0f2841b45376855508', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-rgw-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-02-10 09:58:37.291026 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3c28fb40b14b32461f0c46912f3473ecb4519c77b1bfb63c2e544f30c33272d3', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2025-02-10 09:58:37.291040 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6e81b506a4ff108cda86203d2f9885548ab0444a91fb34c68d5e5f8a158f8dac', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 27 minutes'})  2025-02-10 09:58:37.291059 | orchestrator | ok: [testbed-node-4] => (item={'id': '7a58ddc9ad453c4bd578b5aebf4d2e8381e5752e0a2372091ae4327687143bd9', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:58:37.291084 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd20a3f4cb43fc8e3c2605bd5f6183c69f7a0a249d4f3f24925ada5d1287e35c2', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:58:42.998763 | orchestrator | skipping: [testbed-node-4] => (item={'id': '97c79e0e8e5972e3a06bbd55f15dd40e21d3c472527484cdb1b944cb8d728368', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 33 minutes'})  2025-02-10 09:58:42.998943 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a35a26e8272e1ebe21024dba31bab9db9877b90f700610f65c5b471fc6c7ba3f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:58:42.998982 | orchestrator | skipping: [testbed-node-4] => (item={'id': '912f72fb8f18fb6a0b9043ea791e1ede300768ab6c25b410b7aa9d8f2066f876', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:58:42.999007 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8c1bdfad650dfdbdf76e953ad908acd820676f070def102a0bb53790d191db21', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'name': '/cron', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:58:42.999032 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7d3690946dbed15c4fcf83f7d4227ac163aceb4b8206fae84ca97fae0d33d1a2', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:58:42.999057 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8c5004e5b41313e95d9b9e0f1226eac3eb4574d03550412df000b5f7a559103', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'name': '/fluentd', 'state': 'running', 'status': 'Up 37 minutes'})  2025-02-10 09:58:42.999115 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b780fed2e7aabf6095a19d4afc7bdf18fbacc622d7c10287d79322da97c6afea', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-compute:29.2.1.20241206', 
'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-02-10 09:58:42.999131 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6f3c922ae4c967e3e96284fa91066151c1b00f51707f3dfedc4226d32546ad05', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-libvirt:8.0.0.20241206', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:58:42.999162 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3f27541d462c26d1cc1e6641727309e6f956ce4c9396fdfe2dcdfb323d981dff', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/nova-ssh:29.2.1.20241206', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:58:42.999177 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fce82cb4bd6a1cd7855a984a05c349894b3f8918ec6cd6367d803dbaaa368a6b', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:58:42.999193 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b807c6fd22fe3ead8831be124bfc0268a9b3f386282ff25a8a2bcec892041c87', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-backup:24.2.1.20241206', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-02-10 09:58:42.999207 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5b6a95ba47ec254e6c547f850432f7648f1425547b663f1545a9adb59ca9c908', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cinder-volume:24.2.1.20241206', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-02-10 09:58:42.999221 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f8f8229a54edcbb4390143e5dc41fab99b342410bc15148e4c9b20fdbbd6b9bf', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:58:42.999256 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bcbb948cba2a7ebdd60e940ba0bbbdb64514dad40816c4109059181190adf864', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-02-10 09:58:42.999274 | orchestrator | skipping: [testbed-node-5] => (item={'id': '70d0344672011efd3931e0dc4b0f61c5ff16ba207eb966f056e9ff6b71dac7cd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-02-10 09:58:42.999290 | orchestrator | skipping: [testbed-node-5] => (item={'id': '46b34c04b76a8094a42fd59dbbba3af98b899c53b32616aa7e881bfaa20d574b', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-rgw-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-02-10 09:58:42.999305 | orchestrator | skipping: [testbed-node-5] => (item={'id': '145e39e521cf31321a7979b1fa519ff0c764ca6ecc06cbb117294f4665e769f1', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2025-02-10 09:58:42.999321 | orchestrator | skipping: [testbed-node-5] => (item={'id': '774db2ac1dce83a42d27aefd9c2393bf41a4db4a5a25291599128be67c9f5fa7', 'image': 
'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 27 minutes'})  2025-02-10 09:58:42.999338 | orchestrator | ok: [testbed-node-5] => (item={'id': '5b07d040c738576936400e45cf5734e8a28f5d0d8bfd369f7be08096991064bf', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:58:42.999365 | orchestrator | ok: [testbed-node-5] => (item={'id': 'fc8f79f53fdc82b632ada4b4cd091cad665de6a457d0654bee958d9d4e2cf86e', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:17.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:58:42.999390 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8a2f3ec5ab21cce12e9ed6fc0fcb1b92a5e369758c20d4209cd94ae562cfe6f6', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/ovn-controller:24.3.4.20241206', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 33 minutes'})  2025-02-10 09:58:42.999415 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dfb1942b08b029d01cb111e11cd8c5771c90ceb4e36d399082864fa334402d87', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:58:42.999440 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0a55318f767ff4d97067119c5d658c16dd8a074d75ee973a05b5be83c7909267', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/openvswitch-db-server:3.3.0.20241206', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:58:42.999473 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd5d44b27c0b8e154b637a799f3e987e6f6002bbe89e936320923ef6f587aa15d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/cron:3.0.20241206', 'name': '/cron', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:58:42.999499 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0763b75c471f9808504a1d37ada17cf018290ac95a04660352ebfd20d391a17f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/kolla-toolbox:18.3.0.20241206', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:58:42.999525 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fc4f215b878c0a5d55248e34610b4b291ddf3663d1fc77780e18f81accb447be', 'image': 'nexus.testbed.osism.xyz:8193/kolla/release/fluentd:5.0.5.20241206', 'name': '/fluentd', 'state': 'running', 'status': 'Up 37 minutes'})  2025-02-10 09:58:42.999547 | orchestrator | 2025-02-10 09:58:42.999566 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-02-10 09:58:42.999583 | orchestrator | Monday 10 February 2025 09:58:37 +0000 (0:00:00.566) 0:00:06.484 ******* 2025-02-10 09:58:42.999599 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:42.999617 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:42.999633 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:42.999654 | orchestrator | 2025-02-10 09:58:42.999706 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-02-10 09:58:42.999731 | orchestrator | Monday 10 February 2025 09:58:37 +0000 (0:00:00.300) 0:00:06.784 ******* 2025-02-10 09:58:42.999771 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.985072 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:58.985233 | 
orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:58.985303 | orchestrator | 2025-02-10 09:58:58.985323 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-02-10 09:58:58.985339 | orchestrator | Monday 10 February 2025 09:58:37 +0000 (0:00:00.308) 0:00:07.093 ******* 2025-02-10 09:58:58.985353 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.985368 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.985383 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.985398 | orchestrator | 2025-02-10 09:58:58.985412 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:58:58.985427 | orchestrator | Monday 10 February 2025 09:58:38 +0000 (0:00:00.593) 0:00:07.687 ******* 2025-02-10 09:58:58.985441 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.985455 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.985469 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.985483 | orchestrator | 2025-02-10 09:58:58.985498 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-02-10 09:58:58.985581 | orchestrator | Monday 10 February 2025 09:58:38 +0000 (0:00:00.317) 0:00:08.005 ******* 2025-02-10 09:58:58.985611 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-02-10 09:58:58.985636 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-02-10 09:58:58.985652 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.985728 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-02-10 09:58:58.985745 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-02-10 09:58:58.985761 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:58.985778 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-02-10 09:58:58.985793 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-02-10 09:58:58.985808 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:58.985824 | orchestrator | 2025-02-10 09:58:58.985839 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-02-10 09:58:58.985855 | orchestrator | Monday 10 February 2025 09:58:39 +0000 (0:00:00.349) 0:00:08.354 ******* 2025-02-10 09:58:58.985871 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.985887 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.985901 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.985915 | orchestrator | 2025-02-10 09:58:58.985928 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-02-10 09:58:58.985942 | orchestrator | Monday 10 February 2025 09:58:39 +0000 (0:00:00.563) 0:00:08.918 ******* 2025-02-10 09:58:58.985956 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.985970 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:58.985983 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:58.985997 | orchestrator | 2025-02-10 09:58:58.986011 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-02-10 09:58:58.986083 | orchestrator | Monday 10 February 
2025 09:58:40 +0000 (0:00:00.359) 0:00:09.277 ******* 2025-02-10 09:58:58.986098 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.986112 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:58.986126 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:58.986140 | orchestrator | 2025-02-10 09:58:58.986154 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-02-10 09:58:58.986168 | orchestrator | Monday 10 February 2025 09:58:40 +0000 (0:00:00.362) 0:00:09.640 ******* 2025-02-10 09:58:58.986182 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.986195 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.986209 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.986223 | orchestrator | 2025-02-10 09:58:58.986237 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:58:58.986251 | orchestrator | Monday 10 February 2025 09:58:40 +0000 (0:00:00.369) 0:00:10.010 ******* 2025-02-10 09:58:58.986264 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.986278 | orchestrator | 2025-02-10 09:58:58.986292 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:58:58.986306 | orchestrator | Monday 10 February 2025 09:58:41 +0000 (0:00:00.257) 0:00:10.268 ******* 2025-02-10 09:58:58.986319 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.986333 | orchestrator | 2025-02-10 09:58:58.986347 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:58:58.986361 | orchestrator | Monday 10 February 2025 09:58:41 +0000 (0:00:00.759) 0:00:11.028 ******* 2025-02-10 09:58:58.986375 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.986389 | orchestrator | 2025-02-10 09:58:58.986402 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:58.986416 | orchestrator | Monday 10 February 2025 09:58:42 +0000 (0:00:00.268) 0:00:11.296 ******* 2025-02-10 09:58:58.986442 | orchestrator | 2025-02-10 09:58:58.986456 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:58.986470 | orchestrator | Monday 10 February 2025 09:58:42 +0000 (0:00:00.069) 0:00:11.365 ******* 2025-02-10 09:58:58.986484 | orchestrator | 2025-02-10 09:58:58.986498 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:58.986512 | orchestrator | Monday 10 February 2025 09:58:42 +0000 (0:00:00.076) 0:00:11.442 ******* 2025-02-10 09:58:58.986526 | orchestrator | 2025-02-10 09:58:58.986539 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:58:58.986553 | orchestrator | Monday 10 February 2025 09:58:42 +0000 (0:00:00.073) 0:00:11.516 ******* 2025-02-10 09:58:58.986567 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.986581 | orchestrator | 2025-02-10 09:58:58.986594 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-02-10 09:58:58.986608 | orchestrator | Monday 10 February 2025 09:58:42 +0000 (0:00:00.320) 0:00:11.836 ******* 2025-02-10 09:58:58.986622 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.986636 | orchestrator | 2025-02-10 09:58:58.986700 | orchestrator | TASK [Prepare test data] 
******************************************************* 2025-02-10 09:58:58.986717 | orchestrator | Monday 10 February 2025 09:58:42 +0000 (0:00:00.259) 0:00:12.096 ******* 2025-02-10 09:58:58.986731 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.986745 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.986759 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.986773 | orchestrator | 2025-02-10 09:58:58.986787 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-02-10 09:58:58.986801 | orchestrator | Monday 10 February 2025 09:58:43 +0000 (0:00:00.327) 0:00:12.423 ******* 2025-02-10 09:58:58.986814 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.986828 | orchestrator | 2025-02-10 09:58:58.986842 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-02-10 09:58:58.986856 | orchestrator | Monday 10 February 2025 09:58:43 +0000 (0:00:00.260) 0:00:12.684 ******* 2025-02-10 09:58:58.986869 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:58:58.986883 | orchestrator | 2025-02-10 09:58:58.986904 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-02-10 09:58:58.986918 | orchestrator | Monday 10 February 2025 09:58:45 +0000 (0:00:02.247) 0:00:14.931 ******* 2025-02-10 09:58:58.986932 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.986946 | orchestrator | 2025-02-10 09:58:58.986960 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-02-10 09:58:58.986974 | orchestrator | Monday 10 February 2025 09:58:45 +0000 (0:00:00.160) 0:00:15.092 ******* 2025-02-10 09:58:58.986988 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.987001 | orchestrator | 2025-02-10 09:58:58.987015 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-02-10 09:58:58.987029 | orchestrator | Monday 10 February 2025 09:58:46 +0000 (0:00:00.223) 0:00:15.315 ******* 2025-02-10 09:58:58.987043 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.987057 | orchestrator | 2025-02-10 09:58:58.987071 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-02-10 09:58:58.987084 | orchestrator | Monday 10 February 2025 09:58:46 +0000 (0:00:00.141) 0:00:15.456 ******* 2025-02-10 09:58:58.987098 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.987112 | orchestrator | 2025-02-10 09:58:58.987126 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:58:58.987140 | orchestrator | Monday 10 February 2025 09:58:46 +0000 (0:00:00.144) 0:00:15.601 ******* 2025-02-10 09:58:58.987153 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.987167 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.987181 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.987195 | orchestrator | 2025-02-10 09:58:58.987209 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-02-10 09:58:58.987230 | orchestrator | Monday 10 February 2025 09:58:46 +0000 (0:00:00.354) 0:00:15.955 ******* 2025-02-10 09:58:58.987244 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:58:58.987258 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:58:58.987272 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:58:58.987285 | orchestrator | 2025-02-10 
09:58:58.987299 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-02-10 09:58:58.987313 | orchestrator | Monday 10 February 2025 09:58:48 +0000 (0:00:01.457) 0:00:17.413 ******* 2025-02-10 09:58:58.987327 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.987340 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.987354 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.987373 | orchestrator | 2025-02-10 09:58:58.987387 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-02-10 09:58:58.987401 | orchestrator | Monday 10 February 2025 09:58:48 +0000 (0:00:00.615) 0:00:18.029 ******* 2025-02-10 09:58:58.987414 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.987428 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.987442 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.987456 | orchestrator | 2025-02-10 09:58:58.987470 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-02-10 09:58:58.987484 | orchestrator | Monday 10 February 2025 09:58:49 +0000 (0:00:00.457) 0:00:18.487 ******* 2025-02-10 09:58:58.987498 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.987512 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:58.987525 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:58.987539 | orchestrator | 2025-02-10 09:58:58.987553 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-02-10 09:58:58.987566 | orchestrator | Monday 10 February 2025 09:58:49 +0000 (0:00:00.305) 0:00:18.793 ******* 2025-02-10 09:58:58.987580 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.987594 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.987608 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.987622 | orchestrator | 2025-02-10 09:58:58.987636 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-02-10 09:58:58.987650 | orchestrator | Monday 10 February 2025 09:58:50 +0000 (0:00:00.656) 0:00:19.449 ******* 2025-02-10 09:58:58.987680 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.987695 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:58.987709 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:58.987723 | orchestrator | 2025-02-10 09:58:58.987737 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-02-10 09:58:58.987751 | orchestrator | Monday 10 February 2025 09:58:50 +0000 (0:00:00.359) 0:00:19.809 ******* 2025-02-10 09:58:58.987765 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:58.987779 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:58.987792 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:58.987806 | orchestrator | 2025-02-10 09:58:58.987820 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:58:58.987834 | orchestrator | Monday 10 February 2025 09:58:51 +0000 (0:00:00.341) 0:00:20.150 ******* 2025-02-10 09:58:58.987847 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.987861 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:58.987875 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:58.987907 | orchestrator | 2025-02-10 09:58:58.987923 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-02-10 
09:58:58.987938 | orchestrator | Monday 10 February 2025 09:58:51 +0000 (0:00:00.468) 0:00:20.619 ******* 2025-02-10 09:58:58.987952 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:58.987976 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:59.684579 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:59.684996 | orchestrator | 2025-02-10 09:58:59.685049 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-02-10 09:58:59.685077 | orchestrator | Monday 10 February 2025 09:58:52 +0000 (0:00:00.689) 0:00:21.309 ******* 2025-02-10 09:58:59.685137 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:59.685166 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:59.685194 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:59.685217 | orchestrator | 2025-02-10 09:58:59.685240 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-02-10 09:58:59.685263 | orchestrator | Monday 10 February 2025 09:58:52 +0000 (0:00:00.378) 0:00:21.687 ******* 2025-02-10 09:58:59.685287 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:59.685311 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:58:59.685335 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:58:59.685361 | orchestrator | 2025-02-10 09:58:59.685401 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-02-10 09:58:59.685426 | orchestrator | Monday 10 February 2025 09:58:52 +0000 (0:00:00.339) 0:00:22.027 ******* 2025-02-10 09:58:59.685449 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:58:59.685472 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:58:59.685495 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:58:59.685517 | orchestrator | 2025-02-10 09:58:59.685539 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-02-10 09:58:59.685562 | orchestrator | Monday 10 February 2025 09:58:53 +0000 (0:00:00.376) 0:00:22.404 ******* 2025-02-10 09:58:59.685586 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:59.685609 | orchestrator | 2025-02-10 09:58:59.685633 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-02-10 09:58:59.685657 | orchestrator | Monday 10 February 2025 09:58:54 +0000 (0:00:00.816) 0:00:23.220 ******* 2025-02-10 09:58:59.685718 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:58:59.685743 | orchestrator | 2025-02-10 09:58:59.685769 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:58:59.685794 | orchestrator | Monday 10 February 2025 09:58:54 +0000 (0:00:00.259) 0:00:23.479 ******* 2025-02-10 09:58:59.685818 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:59.685843 | orchestrator | 2025-02-10 09:58:59.685868 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:58:59.685891 | orchestrator | Monday 10 February 2025 09:58:56 +0000 (0:00:01.838) 0:00:25.318 ******* 2025-02-10 09:58:59.685914 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:59.685939 | orchestrator | 2025-02-10 09:58:59.686719 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:58:59.686789 | orchestrator | Monday 10 February 2025 09:58:56 +0000 (0:00:00.278) 0:00:25.596 
******* 2025-02-10 09:58:59.686816 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:59.686841 | orchestrator | 2025-02-10 09:58:59.686864 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:59.686887 | orchestrator | Monday 10 February 2025 09:58:56 +0000 (0:00:00.281) 0:00:25.877 ******* 2025-02-10 09:58:59.686911 | orchestrator | 2025-02-10 09:58:59.687886 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:59.687928 | orchestrator | Monday 10 February 2025 09:58:56 +0000 (0:00:00.105) 0:00:25.982 ******* 2025-02-10 09:58:59.687951 | orchestrator | 2025-02-10 09:58:59.687975 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:58:59.687997 | orchestrator | Monday 10 February 2025 09:58:56 +0000 (0:00:00.073) 0:00:26.055 ******* 2025-02-10 09:58:59.688018 | orchestrator | 2025-02-10 09:58:59.688038 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-02-10 09:58:59.688058 | orchestrator | Monday 10 February 2025 09:58:57 +0000 (0:00:00.083) 0:00:26.138 ******* 2025-02-10 09:58:59.688078 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:58:59.688098 | orchestrator | 2025-02-10 09:58:59.688118 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:58:59.688139 | orchestrator | Monday 10 February 2025 09:58:58 +0000 (0:00:01.476) 0:00:27.615 ******* 2025-02-10 09:58:59.688185 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-02-10 09:58:59.688207 | orchestrator |  "msg": [ 2025-02-10 09:58:59.688229 | orchestrator |  "Validator run completed.", 2025-02-10 09:58:59.688251 | orchestrator |  "You can find the report file here:", 2025-02-10 09:58:59.688274 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-02-10T09:58:31+00:00-report.json", 2025-02-10 09:58:59.688297 | orchestrator |  "on the following host:", 2025-02-10 09:58:59.688318 | orchestrator |  "testbed-manager" 2025-02-10 09:58:59.688339 | orchestrator |  ] 2025-02-10 09:58:59.688360 | orchestrator | } 2025-02-10 09:58:59.688382 | orchestrator | 2025-02-10 09:58:59.688402 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:58:59.688426 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-10 09:58:59.688448 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 09:58:59.688470 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 09:58:59.688489 | orchestrator | 2025-02-10 09:58:59.688509 | orchestrator | Monday 10 February 2025 09:58:58 +0000 (0:00:00.456) 0:00:28.071 ******* 2025-02-10 09:58:59.688530 | orchestrator | =============================================================================== 2025-02-10 09:58:59.688551 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.25s 2025-02-10 09:58:59.688593 | orchestrator | Aggregate test results step one ----------------------------------------- 1.84s 2025-02-10 09:58:59.959275 | orchestrator | Get timestamp for report file ------------------------------------------- 1.76s 2025-02-10 
09:58:59.959395 | orchestrator | Write report file ------------------------------------------------------- 1.48s 2025-02-10 09:58:59.959413 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.46s 2025-02-10 09:58:59.959426 | orchestrator | Create report output directory ------------------------------------------ 0.95s 2025-02-10 09:58:59.959439 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.82s 2025-02-10 09:58:59.959471 | orchestrator | Aggregate test results step two ----------------------------------------- 0.76s 2025-02-10 09:58:59.959486 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.69s 2025-02-10 09:58:59.959500 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.66s 2025-02-10 09:58:59.959513 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.62s 2025-02-10 09:58:59.959525 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.60s 2025-02-10 09:58:59.959537 | orchestrator | Set test result to passed if count matches ------------------------------ 0.59s 2025-02-10 09:58:59.959552 | orchestrator | Prepare test data ------------------------------------------------------- 0.57s 2025-02-10 09:58:59.959573 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.57s 2025-02-10 09:58:59.959594 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.56s 2025-02-10 09:58:59.959613 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2025-02-10 09:58:59.959633 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.46s 2025-02-10 09:58:59.959653 | orchestrator | Print report file information ------------------------------------------- 0.46s 2025-02-10 09:58:59.959699 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.41s 2025-02-10 09:58:59.959740 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-02-10 09:58:59.964630 | orchestrator | + set -e 2025-02-10 09:58:59.987160 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 09:58:59.987223 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 09:58:59.987238 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 09:58:59.987281 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 09:58:59.987297 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 09:58:59.987311 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 09:58:59.987325 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 09:58:59.987339 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 09:58:59.987353 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 09:58:59.987368 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 09:58:59.987382 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 09:58:59.987396 | orchestrator | ++ export ARA=false 2025-02-10 09:58:59.987409 | orchestrator | ++ ARA=false 2025-02-10 09:58:59.987423 | orchestrator | ++ export TEMPEST=false 2025-02-10 09:58:59.987437 | orchestrator | ++ TEMPEST=false 2025-02-10 09:58:59.987451 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 09:58:59.987476 | orchestrator | ++ IS_ZUUL=true 2025-02-10 09:58:59.987492 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 
09:58:59.987508 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-02-10 09:58:59.987523 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 09:58:59.987538 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 09:58:59.987554 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 09:58:59.987569 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 09:58:59.987584 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 09:58:59.987599 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 09:58:59.987614 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 09:58:59.987629 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 09:58:59.987644 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-02-10 09:58:59.987688 | orchestrator | + source /etc/os-release 2025-02-10 09:58:59.987703 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.1 LTS' 2025-02-10 09:58:59.987718 | orchestrator | ++ NAME=Ubuntu 2025-02-10 09:58:59.987732 | orchestrator | ++ VERSION_ID=24.04 2025-02-10 09:58:59.987747 | orchestrator | ++ VERSION='24.04.1 LTS (Noble Numbat)' 2025-02-10 09:58:59.987763 | orchestrator | ++ VERSION_CODENAME=noble 2025-02-10 09:58:59.987779 | orchestrator | ++ ID=ubuntu 2025-02-10 09:58:59.987793 | orchestrator | ++ ID_LIKE=debian 2025-02-10 09:58:59.987808 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-02-10 09:58:59.987824 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-02-10 09:58:59.987839 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-02-10 09:58:59.987854 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-02-10 09:58:59.987871 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-02-10 09:58:59.987886 | orchestrator | ++ LOGO=ubuntu-logo 2025-02-10 09:58:59.987900 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-02-10 09:58:59.987914 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-02-10 09:58:59.987930 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-02-10 09:58:59.987955 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-02-10 09:59:24.031916 | orchestrator | 2025-02-10 09:59:24.211182 | orchestrator | # Status of Elasticsearch 2025-02-10 09:59:24.211463 | orchestrator | 2025-02-10 09:59:24.211494 | orchestrator | + pushd /opt/configuration/contrib 2025-02-10 09:59:24.211511 | orchestrator | + echo 2025-02-10 09:59:24.211525 | orchestrator | + echo '# Status of Elasticsearch' 2025-02-10 09:59:24.211539 | orchestrator | + echo 2025-02-10 09:59:24.211553 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-02-10 09:59:24.211588 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 8; active_shards: 19; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=8 'active'=19 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-02-10 09:59:24.256159 | orchestrator | 2025-02-10 09:59:24.256280 | orchestrator | # Status of MariaDB 2025-02-10 09:59:24.256298 | orchestrator | 2025-02-10 09:59:24.256313 | orchestrator | + echo 2025-02-10 09:59:24.256327 | orchestrator | + echo '# Status of MariaDB' 2025-02-10 09:59:24.256341 | orchestrator | + echo 2025-02-10 09:59:24.256356 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root -p password -H api-int.testbed.osism.xyz -c 1 2025-02-10 09:59:24.279075 | orchestrator | Reading package lists... 2025-02-10 09:59:24.649846 | orchestrator | Building dependency tree... 2025-02-10 09:59:24.652762 | orchestrator | Reading state information... 2025-02-10 09:59:25.216477 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-02-10 09:59:25.412045 | orchestrator | bc set to manually installed. 2025-02-10 09:59:25.412204 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-02-10 09:59:25.412827 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-02-10 09:59:25.492198 | orchestrator | 2025-02-10 09:59:25.492330 | orchestrator | # Status of Prometheus 2025-02-10 09:59:25.492349 | orchestrator | 2025-02-10 09:59:25.492363 | orchestrator | + echo 2025-02-10 09:59:25.492378 | orchestrator | + echo '# Status of Prometheus' 2025-02-10 09:59:25.492392 | orchestrator | + echo 2025-02-10 09:59:25.492406 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-02-10 09:59:25.492441 | orchestrator | Unauthorized 2025-02-10 09:59:25.498413 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-02-10 09:59:25.567154 | orchestrator | Unauthorized 2025-02-10 09:59:25.570854 | orchestrator | 2025-02-10 09:59:26.082000 | orchestrator | # Status of RabbitMQ 2025-02-10 09:59:26.082134 | orchestrator | 2025-02-10 09:59:26.082146 | orchestrator | + echo 2025-02-10 09:59:26.082153 | orchestrator | + echo '# Status of RabbitMQ' 2025-02-10 09:59:26.082159 | orchestrator | + echo 2025-02-10 09:59:26.082167 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-02-10 09:59:26.082188 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-02-10 09:59:26.093530 | orchestrator | 2025-02-10 09:59:26.099666 | orchestrator | # Status of Redis 2025-02-10 09:59:26.099713 | orchestrator | 2025-02-10 09:59:26.099726 | orchestrator | + echo 2025-02-10 09:59:26.099737 | orchestrator | + echo '# Status of Redis' 2025-02-10 09:59:26.099749 | orchestrator | + echo 2025-02-10 09:59:26.099762 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-02-10 09:59:26.099783 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001440s;;;0.000000;10.000000 2025-02-10 09:59:26.100274 | orchestrator | + popd 2025-02-10 09:59:27.872482 | orchestrator | 2025-02-10 09:59:27.872610 | orchestrator | # Create backup of MariaDB database 2025-02-10 
09:59:27.872630 | orchestrator | + echo 2025-02-10 09:59:27.872693 | orchestrator | + echo '# Create backup of MariaDB database' 2025-02-10 09:59:27.872710 | orchestrator | 2025-02-10 09:59:27.872724 | orchestrator | + echo 2025-02-10 09:59:27.872739 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-02-10 09:59:27.872774 | orchestrator | 2025-02-10 09:59:27 | INFO  | Task cc285dfa-70ee-42cf-ab66-fd29c16c264f (mariadb_backup) was prepared for execution. 2025-02-10 09:59:31.501284 | orchestrator | 2025-02-10 09:59:27 | INFO  | It takes a moment until task cc285dfa-70ee-42cf-ab66-fd29c16c264f (mariadb_backup) has been started and output is visible here. 2025-02-10 09:59:31.501504 | orchestrator | 2025-02-10 09:59:31.502102 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:59:31.502140 | orchestrator | 2025-02-10 09:59:31.502166 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:59:31.506793 | orchestrator | Monday 10 February 2025 09:59:31 +0000 (0:00:00.298) 0:00:00.298 ******* 2025-02-10 09:59:31.739155 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:59:32.009735 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:59:32.009891 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:59:32.010508 | orchestrator | 2025-02-10 09:59:32.011835 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:59:32.012160 | orchestrator | Monday 10 February 2025 09:59:31 +0000 (0:00:00.507) 0:00:00.806 ******* 2025-02-10 09:59:32.682121 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-02-10 09:59:32.682418 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-02-10 09:59:32.683586 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-02-10 09:59:32.684387 | orchestrator | 2025-02-10 09:59:32.684405 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-02-10 09:59:32.684415 | orchestrator | 2025-02-10 09:59:32.684892 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-02-10 09:59:32.685551 | orchestrator | Monday 10 February 2025 09:59:32 +0000 (0:00:00.672) 0:00:01.478 ******* 2025-02-10 09:59:33.125972 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:59:33.126505 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:59:33.126547 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:59:33.126971 | orchestrator | 2025-02-10 09:59:33.127708 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:59:33.128064 | orchestrator | Monday 10 February 2025 09:59:33 +0000 (0:00:00.449) 0:00:01.928 ******* 2025-02-10 09:59:33.947924 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:59:33.948553 | orchestrator | 2025-02-10 09:59:33.948587 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-02-10 09:59:33.948750 | orchestrator | Monday 10 February 2025 09:59:33 +0000 (0:00:00.813) 0:00:02.741 ******* 2025-02-10 09:59:38.158576 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:59:38.160486 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:59:38.160527 | orchestrator | ok: [testbed-node-2] 
2025-02-10 09:59:38.160550 | orchestrator | 2025-02-10 09:59:38.160877 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-02-10 09:59:38.161434 | orchestrator | Monday 10 February 2025 09:59:38 +0000 (0:00:04.215) 0:00:06.957 ******* 2025-02-10 09:59:57.698580 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-02-10 09:59:57.778893 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-02-10 09:59:57.778994 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-10 09:59:57.779005 | orchestrator | mariadb_bootstrap_restart 2025-02-10 09:59:57.779026 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:59:57.779553 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:59:57.780189 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:59:57.780929 | orchestrator | 2025-02-10 09:59:57.784643 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-02-10 09:59:57.785102 | orchestrator | skipping: no hosts matched 2025-02-10 09:59:57.785122 | orchestrator | 2025-02-10 09:59:57.785130 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-10 09:59:57.785137 | orchestrator | skipping: no hosts matched 2025-02-10 09:59:57.785143 | orchestrator | 2025-02-10 09:59:57.785164 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-02-10 09:59:57.785172 | orchestrator | skipping: no hosts matched 2025-02-10 09:59:57.785179 | orchestrator | 2025-02-10 09:59:57.785185 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-02-10 09:59:57.785196 | orchestrator | 2025-02-10 09:59:57.785644 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-02-10 09:59:57.785881 | orchestrator | Monday 10 February 2025 09:59:57 +0000 (0:00:19.624) 0:00:26.581 ******* 2025-02-10 09:59:58.181197 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:59:58.308283 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:59:58.308909 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:59:58.309089 | orchestrator | 2025-02-10 09:59:58.309391 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-02-10 09:59:58.309425 | orchestrator | Monday 10 February 2025 09:59:58 +0000 (0:00:00.522) 0:00:27.104 ******* 2025-02-10 09:59:58.496543 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:59:58.563808 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:59:58.566677 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:59:58.567252 | orchestrator | 2025-02-10 09:59:58.567930 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:59:58.568297 | orchestrator | 2025-02-10 09:59:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:59:58.569249 | orchestrator | 2025-02-10 09:59:58 | INFO  | Please wait and do not abort execution. 
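The full Mariabackup task reports changed only on testbed-node-0 and skips testbed-node-1/2: a consistent backup of a Galera cluster only needs to be taken from one member, so the role runs Mariabackup on a single host of the shard. The resulting archives can be listed on that node afterwards; the volume name mariadb_backup is the kolla-ansible default and an assumption here, since the log does not print it.

# Sketch (on testbed-node-0): where did the backup land?
# mariadb_backup volume name is assumed, not shown in the log.
docker volume inspect mariadb_backup
docker run --rm -v mariadb_backup:/backup alpine ls -lh /backup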
2025-02-10 09:59:58.569957 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:59:58.570750 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:59:58.571802 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:59:58.572520 | orchestrator | 2025-02-10 09:59:58.573151 | orchestrator | 2025-02-10 09:59:58.574405 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:59:58.575597 | orchestrator | Monday 10 February 2025 09:59:58 +0000 (0:00:00.260) 0:00:27.365 ******* 2025-02-10 09:59:58.576801 | orchestrator | =============================================================================== 2025-02-10 09:59:58.576836 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 19.62s 2025-02-10 09:59:58.577958 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 4.22s 2025-02-10 09:59:58.580424 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.81s 2025-02-10 09:59:58.584730 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2025-02-10 09:59:58.586358 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.52s 2025-02-10 09:59:58.586399 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2025-02-10 09:59:58.588414 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.45s 2025-02-10 09:59:58.589287 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.26s 2025-02-10 09:59:59.374004 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-02-10 10:00:01.129146 | orchestrator | 2025-02-10 10:00:01 | INFO  | Task ab8bee1d-f988-4846-a220-47b8e2d05e40 (mariadb_backup) was prepared for execution. 2025-02-10 10:00:04.660783 | orchestrator | 2025-02-10 10:00:01 | INFO  | It takes a moment until task ab8bee1d-f988-4846-a220-47b8e2d05e40 (mariadb_backup) has been started and output is visible here. 
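Both backup modes are exercised back to back: the backup type is passed to the same play as an extra variable, and because an incremental Mariabackup only records changes made after the previous backup, the incremental run below presupposes that the full run above succeeded. As used by this job:

# Full backup first, then an incremental one on top of it
osism apply mariadb_backup -e mariadb_backup_type=full
osism apply mariadb_backup -e mariadb_backup_type=incremental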
2025-02-10 10:00:04.661039 | orchestrator | 2025-02-10 10:00:04.664771 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 10:00:04.664812 | orchestrator | 2025-02-10 10:00:04.665657 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 10:00:04.666681 | orchestrator | Monday 10 February 2025 10:00:04 +0000 (0:00:00.291) 0:00:00.291 ******* 2025-02-10 10:00:04.895052 | orchestrator | ok: [testbed-node-0] 2025-02-10 10:00:05.173003 | orchestrator | ok: [testbed-node-1] 2025-02-10 10:00:05.173405 | orchestrator | ok: [testbed-node-2] 2025-02-10 10:00:05.173452 | orchestrator | 2025-02-10 10:00:05.174502 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 10:00:05.179711 | orchestrator | Monday 10 February 2025 10:00:05 +0000 (0:00:00.511) 0:00:00.803 ******* 2025-02-10 10:00:05.849902 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-02-10 10:00:05.850350 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-02-10 10:00:05.850413 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-02-10 10:00:05.851392 | orchestrator | 2025-02-10 10:00:05.852131 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-02-10 10:00:05.852179 | orchestrator | 2025-02-10 10:00:05.852971 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-02-10 10:00:05.853055 | orchestrator | Monday 10 February 2025 10:00:05 +0000 (0:00:00.676) 0:00:01.480 ******* 2025-02-10 10:00:06.314148 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 10:00:06.314365 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 10:00:06.314390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 10:00:06.314412 | orchestrator | 2025-02-10 10:00:06.314716 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 10:00:06.314825 | orchestrator | Monday 10 February 2025 10:00:06 +0000 (0:00:00.468) 0:00:01.948 ******* 2025-02-10 10:00:07.011949 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 10:00:07.013146 | orchestrator | 2025-02-10 10:00:07.015518 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-02-10 10:00:07.017644 | orchestrator | Monday 10 February 2025 10:00:06 +0000 (0:00:00.690) 0:00:02.639 ******* 2025-02-10 10:00:11.109787 | orchestrator | ok: [testbed-node-1] 2025-02-10 10:00:11.110549 | orchestrator | ok: [testbed-node-0] 2025-02-10 10:00:11.110646 | orchestrator | ok: [testbed-node-2] 2025-02-10 10:00:11.110687 | orchestrator | 2025-02-10 10:00:11.111506 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-02-10 10:00:11.111989 | orchestrator | Monday 10 February 2025 10:00:11 +0000 (0:00:04.102) 0:00:06.741 ******* 2025-02-10 10:00:29.519196 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-02-10 10:00:29.519889 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-02-10 10:00:29.519992 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-10 10:00:29.520021 | orchestrator | 
mariadb_bootstrap_restart 2025-02-10 10:00:29.606921 | orchestrator | skipping: [testbed-node-1] 2025-02-10 10:00:29.609455 | orchestrator | skipping: [testbed-node-2] 2025-02-10 10:00:29.612831 | orchestrator | changed: [testbed-node-0] 2025-02-10 10:00:29.612899 | orchestrator | 2025-02-10 10:00:29.624814 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-02-10 10:00:29.624936 | orchestrator | skipping: no hosts matched 2025-02-10 10:00:29.624964 | orchestrator | 2025-02-10 10:00:29.624990 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-10 10:00:29.625033 | orchestrator | skipping: no hosts matched 2025-02-10 10:00:29.629930 | orchestrator | 2025-02-10 10:00:30.125330 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-02-10 10:00:30.125404 | orchestrator | skipping: no hosts matched 2025-02-10 10:00:30.125424 | orchestrator | 2025-02-10 10:00:30.125440 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-02-10 10:00:30.125454 | orchestrator | 2025-02-10 10:00:30.125468 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-02-10 10:00:30.125483 | orchestrator | Monday 10 February 2025 10:00:29 +0000 (0:00:18.497) 0:00:25.238 ******* 2025-02-10 10:00:30.125509 | orchestrator | skipping: [testbed-node-0] 2025-02-10 10:00:30.277106 | orchestrator | skipping: [testbed-node-1] 2025-02-10 10:00:30.277309 | orchestrator | skipping: [testbed-node-2] 2025-02-10 10:00:30.277757 | orchestrator | 2025-02-10 10:00:30.277861 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-02-10 10:00:30.282683 | orchestrator | Monday 10 February 2025 10:00:30 +0000 (0:00:00.668) 0:00:25.907 ******* 2025-02-10 10:00:30.502895 | orchestrator | skipping: [testbed-node-0] 2025-02-10 10:00:30.542416 | orchestrator | skipping: [testbed-node-1] 2025-02-10 10:00:30.551431 | orchestrator | skipping: [testbed-node-2] 2025-02-10 10:00:30.553312 | orchestrator | 2025-02-10 10:00:30.553348 | orchestrator | 2025-02-10 10:00:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 10:00:30.553428 | orchestrator | 2025-02-10 10:00:30 | INFO  | Please wait and do not abort execution. 
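With the backups done, the job switches to /opt/configuration/scripts/check/300-openstack.sh, a series of read-only OpenStack CLI calls made with the admin cloud credentials. The sketch below groups the same calls into one script; the commands are those visible in the log, while the trailing grep for down services is an added convenience and an assumption of this sketch, not part of the original script.

#!/usr/bin/env bash
# Sketch: read-only control-plane smoke checks with the admin credentials.
set -eu
export OS_CLOUD=admin

openstack endpoint list                   # Keystone service catalog
openstack volume service list             # Cinder scheduler/volume/backup state
openstack network agent list              # OVN controller and metadata agents
openstack network service provider list   # L3 service provider (ovn)
openstack compute service list            # Nova scheduler/conductor/compute state
openstack hypervisor list                 # registered hypervisors

# Added check (assumption): fail if any Nova service reports 'down'
if openstack compute service list -f value -c State | grep -qx down; then
  echo "at least one Nova service is down" >&2
  exit 1
fi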
2025-02-10 10:00:30.553444 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 10:00:30.556185 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 10:00:30.558280 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 10:00:30.558455 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 10:00:30.560436 | orchestrator | 2025-02-10 10:00:30.560912 | orchestrator | 2025-02-10 10:00:30.560954 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 10:00:30.560980 | orchestrator | Monday 10 February 2025 10:00:30 +0000 (0:00:00.269) 0:00:26.177 ******* 2025-02-10 10:00:30.561003 | orchestrator | =============================================================================== 2025-02-10 10:00:30.561027 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ----------- 18.50s 2025-02-10 10:00:30.561051 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 4.10s 2025-02-10 10:00:30.561085 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.69s 2025-02-10 10:00:30.561975 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-02-10 10:00:30.562563 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.67s 2025-02-10 10:00:30.563027 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2025-02-10 10:00:30.563448 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.47s 2025-02-10 10:00:30.564032 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.27s 2025-02-10 10:00:31.286543 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-02-10 10:00:31.293198 | orchestrator | + set -e 2025-02-10 10:00:31.294220 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 10:00:31.294254 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 10:00:31.294268 | orchestrator | ++ INTERACTIVE=false 2025-02-10 10:00:31.294280 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 10:00:31.294292 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 10:00:31.294305 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-02-10 10:00:31.294324 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-02-10 10:00:31.336900 | orchestrator | 2025-02-10 10:00:39.078857 | orchestrator | # OpenStack endpoints 2025-02-10 10:00:39.078980 | orchestrator | 2025-02-10 10:00:39.078994 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-02-10 10:00:39.079003 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-02-10 10:00:39.079011 | orchestrator | + export OS_CLOUD=admin 2025-02-10 10:00:39.079020 | orchestrator | + OS_CLOUD=admin 2025-02-10 10:00:39.079028 | orchestrator | + echo 2025-02-10 10:00:39.079047 | orchestrator | + echo '# OpenStack endpoints' 2025-02-10 10:00:39.079055 | orchestrator | + echo 2025-02-10 10:00:39.079065 | orchestrator | + openstack endpoint list 2025-02-10 10:00:39.079086 | orchestrator | 
+----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-02-10 10:00:39.079093 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-02-10 10:00:39.079099 | orchestrator | +----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-02-10 10:00:39.079104 | orchestrator | | 021762b6016b418ca400df8220598409 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-02-10 10:00:39.079109 | orchestrator | | 0798186d9d014e5683f260f6489c7a93 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-02-10 10:00:39.079114 | orchestrator | | 0b872f07d428448ab1c44c5560de65e2 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-02-10 10:00:39.079118 | orchestrator | | 12c8a2b0dbcc4fe8bc4509df2c6363db | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-02-10 10:00:39.079140 | orchestrator | | 195a8ee56248419c8af2a7586322c5dc | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-02-10 10:00:39.079145 | orchestrator | | 1f28ca43a7f6439d9bd717496835e661 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-02-10 10:00:39.079150 | orchestrator | | 20c8c9054b9e41d288fd6889d6af1b82 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-02-10 10:00:39.079157 | orchestrator | | 23d19500ca6744ed8cabbb3e601999ae | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-02-10 10:00:39.079163 | orchestrator | | 3ab8c759815844fa88b0e72849902376 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-02-10 10:00:39.079168 | orchestrator | | 4e795de9cf034e92b8974bddb3d958c9 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-02-10 10:00:39.079172 | orchestrator | | 554a42efc3aa4b15871e7d930d31e65e | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-02-10 10:00:39.079177 | orchestrator | | 6a837818d1b7449b8a20a16d16ab8b6c | RegionOne | ironic | baremetal | True | internal | https://api-int.testbed.osism.xyz:6385 | 2025-02-10 10:00:39.079182 | orchestrator | | 6c1ed8722c5c441db63ab81a0149265b | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-02-10 10:00:39.079187 | orchestrator | | 6d6812f80e2e44c3a5d545fc18068d74 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-02-10 10:00:39.079192 | orchestrator | | 70fcfcb5be854c3980e7e61a91821b7c | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-02-10 10:00:39.079197 | orchestrator | | 8571743e355c49e89e47ad37eed208b0 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-02-10 10:00:39.079201 | orchestrator | | 8b9613e2ec914a9f898792f7c10902fd | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-02-10 10:00:39.079206 | orchestrator | | 
9487b1bff16d49b38430d0e37db87017 | RegionOne | ironic | baremetal | True | public | https://api.testbed.osism.xyz:6385 | 2025-02-10 10:00:39.079211 | orchestrator | | a73aa6b79efe4424a4f09652e3e40779 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-02-10 10:00:39.079221 | orchestrator | | ab8f92c335af495cb807dce369586330 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-02-10 10:00:39.443179 | orchestrator | | cf448cf7493f4700a9e03dadaa534796 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-02-10 10:00:39.443312 | orchestrator | | e0f05f7ef2654720b909291a25d0910a | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-02-10 10:00:39.443331 | orchestrator | | e94abb73b2cd4321aeb3f2b9c5d35531 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-02-10 10:00:39.443375 | orchestrator | | f438a70c807045fcad40d00056505c08 | RegionOne | ironic-inspector | baremetal-introspection | True | public | https://api.testbed.osism.xyz:5050 | 2025-02-10 10:00:39.443390 | orchestrator | | fb197d08c88a40a282ddb0b256b84a38 | RegionOne | ironic-inspector | baremetal-introspection | True | internal | https://api-int.testbed.osism.xyz:5050 | 2025-02-10 10:00:39.443414 | orchestrator | | fd53345820514d6fa2915f3fe31c93f8 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-02-10 10:00:39.443429 | orchestrator | +----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-02-10 10:00:39.443462 | orchestrator | 2025-02-10 10:00:42.540234 | orchestrator | # Cinder 2025-02-10 10:00:42.540371 | orchestrator | 2025-02-10 10:00:42.540391 | orchestrator | + echo 2025-02-10 10:00:42.540406 | orchestrator | + echo '# Cinder' 2025-02-10 10:00:42.540442 | orchestrator | + echo 2025-02-10 10:00:42.540457 | orchestrator | + openstack volume service list 2025-02-10 10:00:42.540492 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-02-10 10:00:42.897019 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-02-10 10:00:42.897147 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-02-10 10:00:42.897165 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-02-10T10:00:38.000000 | 2025-02-10 10:00:42.897201 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-02-10T10:00:38.000000 | 2025-02-10 10:00:42.897217 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-02-10T10:00:39.000000 | 2025-02-10 10:00:42.897232 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-02-10T10:00:38.000000 | 2025-02-10 10:00:42.897246 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-02-10T10:00:39.000000 | 2025-02-10 10:00:42.897260 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-02-10T10:00:41.000000 | 2025-02-10 10:00:42.897274 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 
2025-02-10T10:00:41.000000 | 2025-02-10 10:00:42.897288 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-02-10T10:00:41.000000 | 2025-02-10 10:00:42.897302 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-02-10T10:00:32.000000 | 2025-02-10 10:00:42.897316 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-02-10 10:00:42.897347 | orchestrator | 2025-02-10 10:00:46.136953 | orchestrator | # Neutron 2025-02-10 10:00:46.137078 | orchestrator | 2025-02-10 10:00:46.137094 | orchestrator | + echo 2025-02-10 10:00:46.137105 | orchestrator | + echo '# Neutron' 2025-02-10 10:00:46.137117 | orchestrator | + echo 2025-02-10 10:00:46.137128 | orchestrator | + openstack network agent list 2025-02-10 10:00:46.137155 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-02-10 10:00:46.497811 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-02-10 10:00:46.497935 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-02-10 10:00:46.497951 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-02-10 10:00:46.497963 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-02-10 10:00:46.498006 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-02-10 10:00:46.498072 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-02-10 10:00:46.498085 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-02-10 10:00:46.498096 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-02-10 10:00:46.498107 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-02-10 10:00:46.498119 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-02-10 10:00:46.498130 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-02-10 10:00:46.498141 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-02-10 10:00:46.498184 | orchestrator | + openstack network service provider list 2025-02-10 10:00:49.421302 | orchestrator | +---------------+------+---------+ 2025-02-10 10:00:49.768057 | orchestrator | | Service Type | Name | Default | 2025-02-10 10:00:49.768184 | orchestrator | +---------------+------+---------+ 2025-02-10 10:00:49.768202 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-02-10 10:00:49.768218 | orchestrator | +---------------+------+---------+ 2025-02-10 10:00:49.768250 | orchestrator | 2025-02-10 10:00:52.882405 | orchestrator | # Nova 2025-02-10 10:00:52.882547 | orchestrator | 2025-02-10 10:00:52.882569 | orchestrator | + echo 2025-02-10 
10:00:52.882669 | orchestrator | + echo '# Nova' 2025-02-10 10:00:52.882691 | orchestrator | + echo 2025-02-10 10:00:52.882706 | orchestrator | + openstack compute service list 2025-02-10 10:00:52.882741 | orchestrator | +--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-02-10 10:00:53.272187 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-02-10 10:00:53.272335 | orchestrator | +--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-02-10 10:00:53.272367 | orchestrator | | 906d3db9-559c-4f44-8ae7-134bf0f583e5 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-02-10T10:00:42.000000 | 2025-02-10 10:00:53.272394 | orchestrator | | 63400353-42e4-44a3-b25d-6c9348e02640 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-02-10T10:00:52.000000 | 2025-02-10 10:00:53.272420 | orchestrator | | 72809765-7152-4ff9-8541-80886d2b6e42 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-02-10T10:00:44.000000 | 2025-02-10 10:00:53.272447 | orchestrator | | ab665a31-ab60-4603-a8de-82d32df0fdd3 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-02-10T10:00:49.000000 | 2025-02-10 10:00:53.272474 | orchestrator | | 0689df5d-0d47-4fdf-a10e-1f6b6902b6ef | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-02-10T10:00:50.000000 | 2025-02-10 10:00:53.272500 | orchestrator | | 840c7b04-5768-4369-b6e3-01808156a91f | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-02-10T10:00:51.000000 | 2025-02-10 10:00:53.272526 | orchestrator | | 3dd7c07d-0a48-4489-a39e-40e9a25ec497 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-02-10T10:00:52.000000 | 2025-02-10 10:00:53.272578 | orchestrator | | 7fa7f6a0-a708-4832-aae5-5dd52704aa0c | nova-compute | testbed-node-3 | nova | enabled | up | 2025-02-10T10:00:52.000000 | 2025-02-10 10:00:53.272687 | orchestrator | | f5b830db-d6f2-4524-8847-a513e772803c | nova-compute | testbed-node-4 | nova | enabled | up | 2025-02-10T10:00:52.000000 | 2025-02-10 10:00:53.272748 | orchestrator | | 1d33be43-a682-4f26-9195-a82988ce309e | nova-compute | testbed-node-0-ironic | nova | enabled | up | 2025-02-10T10:00:43.000000 | 2025-02-10 10:00:53.272774 | orchestrator | | c1dde77b-b608-4a70-8282-29193be38231 | nova-compute | testbed-node-1-ironic | nova | enabled | up | 2025-02-10T10:00:45.000000 | 2025-02-10 10:00:53.272800 | orchestrator | | 068a53c2-5da0-41eb-ad7a-0879b5581081 | nova-compute | testbed-node-2-ironic | nova | enabled | up | 2025-02-10T10:00:46.000000 | 2025-02-10 10:00:53.272825 | orchestrator | +--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-02-10 10:00:53.272872 | orchestrator | + openstack hypervisor list 2025-02-10 10:00:56.484576 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-02-10 10:00:56.838070 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-02-10 10:00:56.838175 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-02-10 10:00:56.838189 | orchestrator | | 622b4f99-5779-4eca-92fe-16f40183a8c0 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-02-10 10:00:56.838199 | 
orchestrator | | 3318d739-167d-497c-aceb-8a69d32f333b | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-02-10 10:00:56.838209 | orchestrator | | f092e876-1167-412c-a40a-db5da8e907ac | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-02-10 10:00:56.838218 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-02-10 10:00:56.838239 | orchestrator | 2025-02-10 10:00:58.528741 | orchestrator | # Run OpenStack test play 2025-02-10 10:00:58.528881 | orchestrator | 2025-02-10 10:00:58.528903 | orchestrator | + echo 2025-02-10 10:00:58.528919 | orchestrator | + echo '# Run OpenStack test play' 2025-02-10 10:00:58.528936 | orchestrator | + echo 2025-02-10 10:00:58.528950 | orchestrator | + osism apply --environment openstack test 2025-02-10 10:00:58.528982 | orchestrator | 2025-02-10 10:00:58 | INFO  | Trying to run play test in environment openstack 2025-02-10 10:00:58.582939 | orchestrator | 2025-02-10 10:00:58 | INFO  | Task 6940f92b-1322-417b-ae23-e586fbfd5651 (test) was prepared for execution. 2025-02-10 10:01:02.156162 | orchestrator | 2025-02-10 10:00:58 | INFO  | It takes a moment until task 6940f92b-1322-417b-ae23-e586fbfd5651 (test) has been started and output is visible here. 2025-02-10 10:01:02.156340 | orchestrator | 2025-02-10 10:01:02.158193 | orchestrator | PLAY [Create test project] ***************************************************** 2025-02-10 10:01:02.158245 | orchestrator | 2025-02-10 10:01:02.159520 | orchestrator | TASK [Create test domain] ****************************************************** 2025-02-10 10:01:02.161168 | orchestrator | Monday 10 February 2025 10:01:02 +0000 (0:00:00.083) 0:00:00.083 ******* 2025-02-10 10:01:05.511853 | orchestrator | changed: [localhost] 2025-02-10 10:01:09.522099 | orchestrator | 2025-02-10 10:01:09.522254 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-02-10 10:01:09.522274 | orchestrator | Monday 10 February 2025 10:01:05 +0000 (0:00:03.361) 0:00:03.444 ******* 2025-02-10 10:01:09.522306 | orchestrator | changed: [localhost] 2025-02-10 10:01:14.790216 | orchestrator | 2025-02-10 10:01:14.791111 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-02-10 10:01:14.791153 | orchestrator | Monday 10 February 2025 10:01:09 +0000 (0:00:04.008) 0:00:07.453 ******* 2025-02-10 10:01:14.791190 | orchestrator | changed: [localhost] 2025-02-10 10:01:18.613937 | orchestrator | 2025-02-10 10:01:18.614222 | orchestrator | TASK [Create test project] ***************************************************** 2025-02-10 10:01:18.614248 | orchestrator | Monday 10 February 2025 10:01:14 +0000 (0:00:05.267) 0:00:12.720 ******* 2025-02-10 10:01:18.614308 | orchestrator | changed: [localhost] 2025-02-10 10:01:18.614687 | orchestrator | 2025-02-10 10:01:18.614738 | orchestrator | TASK [Create test user] ******************************************************** 2025-02-10 10:01:18.614816 | orchestrator | Monday 10 February 2025 10:01:18 +0000 (0:00:03.826) 0:00:16.546 ******* 2025-02-10 10:01:22.792974 | orchestrator | changed: [localhost] 2025-02-10 10:01:22.793215 | orchestrator | 2025-02-10 10:01:22.793239 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-02-10 10:01:22.793264 | orchestrator | Monday 10 February 2025 10:01:22 +0000 (0:00:04.177) 0:00:20.724 ******* 2025-02-10 10:01:34.177340 | orchestrator | changed: [localhost] => 
(item=load-balancer_member) 2025-02-10 10:01:38.935890 | orchestrator | changed: [localhost] => (item=member) 2025-02-10 10:01:38.936038 | orchestrator | changed: [localhost] => (item=creator) 2025-02-10 10:01:38.936060 | orchestrator | 2025-02-10 10:01:38.936161 | orchestrator | TASK [Create test server group] ************************************************ 2025-02-10 10:01:38.936177 | orchestrator | Monday 10 February 2025 10:01:34 +0000 (0:00:11.380) 0:00:32.104 ******* 2025-02-10 10:01:38.936210 | orchestrator | changed: [localhost] 2025-02-10 10:01:38.936480 | orchestrator | 2025-02-10 10:01:38.936506 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-02-10 10:01:38.936527 | orchestrator | Monday 10 February 2025 10:01:38 +0000 (0:00:04.764) 0:00:36.869 ******* 2025-02-10 10:01:43.781898 | orchestrator | changed: [localhost] 2025-02-10 10:01:43.782638 | orchestrator | 2025-02-10 10:01:43.782688 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-02-10 10:01:43.783093 | orchestrator | Monday 10 February 2025 10:01:43 +0000 (0:00:04.843) 0:00:41.712 ******* 2025-02-10 10:01:47.694960 | orchestrator | changed: [localhost] 2025-02-10 10:01:47.695203 | orchestrator | 2025-02-10 10:01:47.695240 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-02-10 10:01:47.695536 | orchestrator | Monday 10 February 2025 10:01:47 +0000 (0:00:03.914) 0:00:45.627 ******* 2025-02-10 10:01:51.276031 | orchestrator | changed: [localhost] 2025-02-10 10:01:51.277578 | orchestrator | 2025-02-10 10:01:51.277635 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-02-10 10:01:51.279464 | orchestrator | Monday 10 February 2025 10:01:51 +0000 (0:00:03.580) 0:00:49.208 ******* 2025-02-10 10:01:54.993976 | orchestrator | changed: [localhost] 2025-02-10 10:01:54.995192 | orchestrator | 2025-02-10 10:01:54.995234 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-02-10 10:01:54.995259 | orchestrator | Monday 10 February 2025 10:01:54 +0000 (0:00:03.717) 0:00:52.926 ******* 2025-02-10 10:01:58.813234 | orchestrator | changed: [localhost] 2025-02-10 10:01:58.813389 | orchestrator | 2025-02-10 10:01:58.813417 | orchestrator | TASK [Create test network topology] ******************************************** 2025-02-10 10:01:58.814143 | orchestrator | Monday 10 February 2025 10:01:58 +0000 (0:00:03.818) 0:00:56.744 ******* 2025-02-10 10:02:15.464759 | orchestrator | changed: [localhost] 2025-02-10 10:04:32.014183 | orchestrator | 2025-02-10 10:04:32.014389 | orchestrator | TASK [Create test instances] *************************************************** 2025-02-10 10:04:32.014407 | orchestrator | Monday 10 February 2025 10:02:15 +0000 (0:00:16.650) 0:01:13.395 ******* 2025-02-10 10:04:32.014434 | orchestrator | changed: [localhost] => (item=test) 2025-02-10 10:04:32.016762 | orchestrator | changed: [localhost] => (item=test-1) 2025-02-10 10:04:32.016804 | orchestrator | changed: [localhost] => (item=test-2) 2025-02-10 10:04:32.017549 | orchestrator | 2025-02-10 10:04:32.018432 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-02-10 10:05:02.015112 | orchestrator | changed: [localhost] => (item=test-3) 2025-02-10 10:05:02.015578 | orchestrator | 2025-02-10 10:05:02.015623 | orchestrator | STILL ALIVE [task 'Create test 
instances' is running] ************************** 2025-02-10 10:05:12.631598 | orchestrator | changed: [localhost] => (item=test-4) 2025-02-10 10:05:36.662765 | orchestrator | 2025-02-10 10:05:36.662937 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-02-10 10:05:36.662966 | orchestrator | Monday 10 February 2025 10:05:12 +0000 (0:02:57.165) 0:04:10.560 ******* 2025-02-10 10:05:36.663005 | orchestrator | changed: [localhost] => (item=test) 2025-02-10 10:05:36.663556 | orchestrator | changed: [localhost] => (item=test-1) 2025-02-10 10:05:36.664858 | orchestrator | changed: [localhost] => (item=test-2) 2025-02-10 10:05:36.664899 | orchestrator | changed: [localhost] => (item=test-3) 2025-02-10 10:05:36.664922 | orchestrator | changed: [localhost] => (item=test-4) 2025-02-10 10:05:36.664932 | orchestrator | 2025-02-10 10:05:36.664942 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-02-10 10:05:36.664958 | orchestrator | Monday 10 February 2025 10:05:36 +0000 (0:00:24.033) 0:04:34.594 ******* 2025-02-10 10:06:06.146567 | orchestrator | changed: [localhost] => (item=test) 2025-02-10 10:06:06.147150 | orchestrator | changed: [localhost] => (item=test-1) 2025-02-10 10:06:06.147191 | orchestrator | changed: [localhost] => (item=test-2) 2025-02-10 10:06:06.147207 | orchestrator | changed: [localhost] => (item=test-3) 2025-02-10 10:06:06.147222 | orchestrator | changed: [localhost] => (item=test-4) 2025-02-10 10:06:06.147245 | orchestrator | 2025-02-10 10:06:06.147459 | orchestrator | TASK [Create test volume] ****************************************************** 2025-02-10 10:06:06.148267 | orchestrator | Monday 10 February 2025 10:06:06 +0000 (0:00:29.484) 0:05:04.078 ******* 2025-02-10 10:06:13.508550 | orchestrator | changed: [localhost] 2025-02-10 10:06:13.508798 | orchestrator | 2025-02-10 10:06:13.509583 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-02-10 10:06:13.511507 | orchestrator | Monday 10 February 2025 10:06:13 +0000 (0:00:07.362) 0:05:11.441 ******* 2025-02-10 10:06:23.369832 | orchestrator | changed: [localhost] 2025-02-10 10:06:23.371049 | orchestrator | 2025-02-10 10:06:23.371105 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-02-10 10:06:23.371133 | orchestrator | Monday 10 February 2025 10:06:23 +0000 (0:00:09.861) 0:05:21.302 ******* 2025-02-10 10:06:28.892390 | orchestrator | ok: [localhost] 2025-02-10 10:06:28.894589 | orchestrator | 2025-02-10 10:06:28.896661 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-02-10 10:06:28.897721 | orchestrator | Monday 10 February 2025 10:06:28 +0000 (0:00:05.522) 0:05:26.824 ******* 2025-02-10 10:06:28.933506 | orchestrator | ok: [localhost] => { 2025-02-10 10:06:28.934512 | orchestrator |  "msg": "192.168.112.116" 2025-02-10 10:06:28.935289 | orchestrator | } 2025-02-10 10:06:28.935683 | orchestrator | 2025-02-10 10:06:28.935961 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 10:06:28.936902 | orchestrator | 2025-02-10 10:06:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 10:06:28.937353 | orchestrator | 2025-02-10 10:06:28 | INFO  | Please wait and do not abort execution. 
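The test play ends by printing the floating IP of the instance named test (192.168.112.116). The job then inspects the created resources with the test cloud credentials, as in the sketch below; the final ping is not part of the logged script and is only an assumed, optional reachability check against that floating IP.

# Sketch: inspect the resources created by the test play
openstack --os-cloud test server list
openstack --os-cloud test server show test

# Assumed extra step (not in the job output): reachability of the floating IP
ping -c 3 192.168.112.116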
2025-02-10 10:06:28.937392 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 10:06:28.937417 | orchestrator | 2025-02-10 10:06:28.938178 | orchestrator | Monday 10 February 2025 10:06:28 +0000 (0:00:00.043) 0:05:26.868 ******* 2025-02-10 10:06:28.938921 | orchestrator | =============================================================================== 2025-02-10 10:06:28.938949 | orchestrator | Create test instances ------------------------------------------------- 177.17s 2025-02-10 10:06:28.938970 | orchestrator | Add tag to instances --------------------------------------------------- 29.48s 2025-02-10 10:06:28.940581 | orchestrator | Add metadata to instances ---------------------------------------------- 24.03s 2025-02-10 10:06:28.941248 | orchestrator | Create test network topology ------------------------------------------- 16.65s 2025-02-10 10:06:28.941852 | orchestrator | Add member roles to user test ------------------------------------------ 11.38s 2025-02-10 10:06:28.942196 | orchestrator | Attach test volume ------------------------------------------------------ 9.86s 2025-02-10 10:06:28.942673 | orchestrator | Create test volume ------------------------------------------------------ 7.36s 2025-02-10 10:06:28.943000 | orchestrator | Create floating ip address ---------------------------------------------- 5.52s 2025-02-10 10:06:28.943164 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.27s 2025-02-10 10:06:28.943967 | orchestrator | Create ssh security group ----------------------------------------------- 4.84s 2025-02-10 10:06:28.944275 | orchestrator | Create test server group ------------------------------------------------ 4.76s 2025-02-10 10:06:28.945410 | orchestrator | Create test user -------------------------------------------------------- 4.18s 2025-02-10 10:06:28.946349 | orchestrator | Create test-admin user -------------------------------------------------- 4.01s 2025-02-10 10:06:28.946399 | orchestrator | Add rule to ssh security group ------------------------------------------ 3.91s 2025-02-10 10:06:28.947611 | orchestrator | Create test project ----------------------------------------------------- 3.83s 2025-02-10 10:06:28.948489 | orchestrator | Create test keypair ----------------------------------------------------- 3.82s 2025-02-10 10:06:28.949726 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.72s 2025-02-10 10:06:28.950236 | orchestrator | Create icmp security group ---------------------------------------------- 3.58s 2025-02-10 10:06:28.950808 | orchestrator | Create test domain ------------------------------------------------------ 3.36s 2025-02-10 10:06:28.952318 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s 2025-02-10 10:06:29.557446 | orchestrator | + server_list 2025-02-10 10:06:34.226642 | orchestrator | + openstack --os-cloud test server list 2025-02-10 10:06:34.226815 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-02-10 10:06:34.582302 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-02-10 10:06:34.582428 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-02-10 10:06:34.582446 | orchestrator | | 
0ba45d81-1c05-49fd-82b1-b5bc4ff7a5ae | test-4 | ACTIVE | auto_allocated_network=10.42.0.36, 192.168.112.193 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:06:34.582519 | orchestrator | | f417fda9-1976-4596-aba3-11c77dea8c72 | test-3 | ACTIVE | auto_allocated_network=10.42.0.7, 192.168.112.175 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:06:34.582537 | orchestrator | | 4c7f391b-6f68-4995-a1ef-4e9f85ef9336 | test-2 | ACTIVE | auto_allocated_network=10.42.0.39, 192.168.112.141 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:06:34.582552 | orchestrator | | 7b1ecc50-ad94-4392-847d-04f8d641f1da | test-1 | ACTIVE | auto_allocated_network=10.42.0.41, 192.168.112.192 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:06:34.582566 | orchestrator | | 60a2826a-d81e-46ee-b112-e2aa0d5b4276 | test | ACTIVE | auto_allocated_network=10.42.0.50, 192.168.112.116 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:06:34.582580 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-02-10 10:06:34.582613 | orchestrator | + openstack --os-cloud test server show test 2025-02-10 10:06:38.282307 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:38.282452 | orchestrator | | Field | Value | 2025-02-10 10:06:38.282513 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:38.282564 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:06:38.282580 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:06:38.282594 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:06:38.282610 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-02-10 10:06:38.282636 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:06:38.282661 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:06:38.282686 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:06:38.282711 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:06:38.282761 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:06:38.282790 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:06:38.282814 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:06:38.282849 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:06:38.282864 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:06:38.282878 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:06:38.282892 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:06:38.282905 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:02:39.000000 | 2025-02-10 10:06:38.282919 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:06:38.282933 | orchestrator | | accessIPv4 | | 2025-02-10 10:06:38.282951 | orchestrator | | accessIPv6 | | 2025-02-10 10:06:38.282965 | 
orchestrator | | addresses | auto_allocated_network=10.42.0.50, 192.168.112.116 | 2025-02-10 10:06:38.282986 | orchestrator | | config_drive | | 2025-02-10 10:06:38.283001 | orchestrator | | created | 2025-02-10T10:02:24Z | 2025-02-10 10:06:38.283029 | orchestrator | | description | None | 2025-02-10 10:06:38.283043 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:06:38.283057 | orchestrator | | hostId | 64692650efb6418423ccfea64d765f09b0741da5211c0a16fbf01ba6 | 2025-02-10 10:06:38.283071 | orchestrator | | host_status | None | 2025-02-10 10:06:38.283085 | orchestrator | | id | 60a2826a-d81e-46ee-b112-e2aa0d5b4276 | 2025-02-10 10:06:38.283099 | orchestrator | | image | Cirros 0.6.2 (c7d8aa39-9168-47aa-a3da-7f093edb56ee) | 2025-02-10 10:06:38.283113 | orchestrator | | key_name | test | 2025-02-10 10:06:38.283139 | orchestrator | | locked | False | 2025-02-10 10:06:38.283154 | orchestrator | | locked_reason | None | 2025-02-10 10:06:38.283168 | orchestrator | | name | test | 2025-02-10 10:06:38.283189 | orchestrator | | pinned_availability_zone | None | 2025-02-10 10:06:38.283211 | orchestrator | | progress | 0 | 2025-02-10 10:06:38.283225 | orchestrator | | project_id | f5047d9482a147609746840e312130f7 | 2025-02-10 10:06:38.283239 | orchestrator | | properties | hostname='test' | 2025-02-10 10:06:38.283254 | orchestrator | | security_groups | name='ssh' | 2025-02-10 10:06:38.283267 | orchestrator | | | name='icmp' | 2025-02-10 10:06:38.283281 | orchestrator | | server_groups | None | 2025-02-10 10:06:38.283300 | orchestrator | | status | ACTIVE | 2025-02-10 10:06:38.283314 | orchestrator | | tags | test | 2025-02-10 10:06:38.283328 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:06:38.283341 | orchestrator | | updated | 2025-02-10T10:05:17Z | 2025-02-10 10:06:38.283361 | orchestrator | | user_id | 903b03f6a9e6448a877fedde0c8a040f | 2025-02-10 10:06:38.618675 | orchestrator | | volumes_attached | delete_on_termination='False', id='2eb45754-a3e3-4847-af05-d992015681b8' | 2025-02-10 10:06:38.618835 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:38.618887 | orchestrator | + openstack --os-cloud test server show test-1 2025-02-10 10:06:42.184864 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:42.184991 | orchestrator | | Field | Value | 2025-02-10 10:06:42.185012 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:42.185028 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:06:42.185061 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:06:42.185078 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:06:42.185093 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-02-10 10:06:42.185109 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:06:42.185146 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:06:42.185163 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:06:42.185178 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:06:42.185206 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:06:42.185223 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:06:42.185238 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:06:42.185258 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:06:42.185274 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:06:42.185289 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:06:42.185304 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:06:42.185319 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:03:20.000000 | 2025-02-10 10:06:42.185342 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:06:42.185357 | orchestrator | | accessIPv4 | | 2025-02-10 10:06:42.185373 | orchestrator | | accessIPv6 | | 2025-02-10 10:06:42.185388 | orchestrator | | addresses | auto_allocated_network=10.42.0.41, 192.168.112.192 | 2025-02-10 10:06:42.185410 | orchestrator | | config_drive | | 2025-02-10 10:06:42.185426 | orchestrator | | created | 2025-02-10T10:03:06Z | 2025-02-10 10:06:42.185446 | orchestrator | | description | None | 2025-02-10 10:06:42.185533 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:06:42.185551 | orchestrator | | hostId | 91800816d7aa73fa2817e7f098bfa9c3454b77f5cafa3b04881fcedd | 2025-02-10 10:06:42.185565 | orchestrator | | host_status | None | 2025-02-10 10:06:42.185580 | orchestrator | | id | 7b1ecc50-ad94-4392-847d-04f8d641f1da | 2025-02-10 10:06:42.185602 | orchestrator | | image | Cirros 0.6.2 (c7d8aa39-9168-47aa-a3da-7f093edb56ee) | 2025-02-10 10:06:42.185616 | orchestrator | | key_name | test | 2025-02-10 10:06:42.185630 | orchestrator | | locked | False | 2025-02-10 10:06:42.185644 | orchestrator | | locked_reason | None | 2025-02-10 10:06:42.185658 | orchestrator | | name | test-1 | 2025-02-10 10:06:42.185685 | orchestrator | | pinned_availability_zone | None | 2025-02-10 10:06:42.185701 | orchestrator | | progress | 0 | 2025-02-10 10:06:42.185715 | orchestrator | | project_id | f5047d9482a147609746840e312130f7 | 2025-02-10 10:06:42.185729 | orchestrator | | properties | hostname='test-1' | 2025-02-10 10:06:42.185744 | 
orchestrator | | security_groups | name='ssh' | 2025-02-10 10:06:42.185764 | orchestrator | | | name='icmp' | 2025-02-10 10:06:42.185778 | orchestrator | | server_groups | None | 2025-02-10 10:06:42.185792 | orchestrator | | status | ACTIVE | 2025-02-10 10:06:42.185806 | orchestrator | | tags | test | 2025-02-10 10:06:42.185820 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:06:42.185834 | orchestrator | | updated | 2025-02-10T10:05:22Z | 2025-02-10 10:06:42.185866 | orchestrator | | user_id | 903b03f6a9e6448a877fedde0c8a040f | 2025-02-10 10:06:42.190163 | orchestrator | | volumes_attached | | 2025-02-10 10:06:42.190224 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:42.561422 | orchestrator | + openstack --os-cloud test server show test-2 2025-02-10 10:06:46.147681 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:46.147823 | orchestrator | | Field | Value | 2025-02-10 10:06:46.147867 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:46.147883 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:06:46.147898 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:06:46.147912 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:06:46.147925 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-02-10 10:06:46.147958 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:06:46.147984 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:06:46.148002 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:06:46.148016 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:06:46.148045 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:06:46.148059 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:06:46.148083 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:06:46.148098 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:06:46.148112 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:06:46.148126 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:06:46.148140 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:06:46.148160 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:03:55.000000 | 2025-02-10 10:06:46.148178 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:06:46.148193 | orchestrator | | accessIPv4 | | 2025-02-10 10:06:46.148209 | orchestrator | | accessIPv6 | | 2025-02-10 10:06:46.148239 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.39, 192.168.112.141 | 2025-02-10 10:06:46.148261 | orchestrator | | config_drive | | 2025-02-10 10:06:46.148284 | orchestrator | | created | 2025-02-10T10:03:41Z | 2025-02-10 10:06:46.148300 | orchestrator | | description | None | 2025-02-10 10:06:46.148316 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:06:46.148331 | orchestrator | | hostId | 3a9ddc5eb8f9a23f7045c4521b058f6cf901577e21e3320b9937321d | 2025-02-10 10:06:46.148352 | orchestrator | | host_status | None | 2025-02-10 10:06:46.148368 | orchestrator | | id | 4c7f391b-6f68-4995-a1ef-4e9f85ef9336 | 2025-02-10 10:06:46.148383 | orchestrator | | image | Cirros 0.6.2 (c7d8aa39-9168-47aa-a3da-7f093edb56ee) | 2025-02-10 10:06:46.148399 | orchestrator | | key_name | test | 2025-02-10 10:06:46.148415 | orchestrator | | locked | False | 2025-02-10 10:06:46.148430 | orchestrator | | locked_reason | None | 2025-02-10 10:06:46.148452 | orchestrator | | name | test-2 | 2025-02-10 10:06:46.148493 | orchestrator | | pinned_availability_zone | None | 2025-02-10 10:06:46.148509 | orchestrator | | progress | 0 | 2025-02-10 10:06:46.148523 | orchestrator | | project_id | f5047d9482a147609746840e312130f7 | 2025-02-10 10:06:46.148537 | orchestrator | | properties | hostname='test-2' | 2025-02-10 10:06:46.148556 | orchestrator | | security_groups | name='ssh' | 2025-02-10 10:06:46.148570 | orchestrator | | | name='icmp' | 2025-02-10 10:06:46.148584 | orchestrator | | server_groups | None | 2025-02-10 10:06:46.148598 | orchestrator | | status | ACTIVE | 2025-02-10 10:06:46.148612 | orchestrator | | tags | test | 2025-02-10 10:06:46.148626 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:06:46.148647 | orchestrator | | updated | 2025-02-10T10:05:27Z | 2025-02-10 10:06:46.148666 | orchestrator | | user_id | 903b03f6a9e6448a877fedde0c8a040f | 2025-02-10 10:06:46.150384 | orchestrator | | volumes_attached | | 2025-02-10 10:06:46.150437 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:46.507702 | orchestrator | + openstack --os-cloud test server show test-3 2025-02-10 10:06:49.928331 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:49.928510 | orchestrator | | Field | Value | 2025-02-10 10:06:49.928545 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:49.928566 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:06:49.928580 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:06:49.928594 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:06:49.928608 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-02-10 10:06:49.928650 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:06:49.928665 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:06:49.928679 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:06:49.928693 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:06:49.928733 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:06:49.928750 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:06:49.928764 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:06:49.928778 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:06:49.928792 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:06:49.928806 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:06:49.928830 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:06:49.928846 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:04:25.000000 | 2025-02-10 10:06:49.928862 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:06:49.928878 | orchestrator | | accessIPv4 | | 2025-02-10 10:06:49.928899 | orchestrator | | accessIPv6 | | 2025-02-10 10:06:49.928915 | orchestrator | | addresses | auto_allocated_network=10.42.0.7, 192.168.112.175 | 2025-02-10 10:06:49.928937 | orchestrator | | config_drive | | 2025-02-10 10:06:49.928955 | orchestrator | | created | 2025-02-10T10:04:17Z | 2025-02-10 10:06:49.928971 | orchestrator | | description | None | 2025-02-10 10:06:49.928986 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:06:49.929003 | orchestrator | | hostId | 3a9ddc5eb8f9a23f7045c4521b058f6cf901577e21e3320b9937321d | 2025-02-10 10:06:49.929025 | orchestrator | | host_status | None | 2025-02-10 10:06:49.929041 | orchestrator | | id | f417fda9-1976-4596-aba3-11c77dea8c72 | 2025-02-10 10:06:49.929056 | orchestrator | | image | Cirros 0.6.2 (c7d8aa39-9168-47aa-a3da-7f093edb56ee) | 2025-02-10 10:06:49.929072 | orchestrator | | key_name | test | 2025-02-10 10:06:49.929092 | orchestrator | | locked | False | 2025-02-10 10:06:49.929109 | orchestrator | | locked_reason | None | 2025-02-10 10:06:49.929125 | orchestrator | | name | test-3 | 2025-02-10 10:06:49.929146 | orchestrator | | pinned_availability_zone | None | 2025-02-10 10:06:49.929161 | orchestrator | | progress | 0 | 2025-02-10 10:06:49.929174 | orchestrator | | project_id | f5047d9482a147609746840e312130f7 | 2025-02-10 10:06:49.929188 | orchestrator | | properties | hostname='test-3' | 2025-02-10 10:06:49.929208 | 
orchestrator | | security_groups | name='ssh' | 2025-02-10 10:06:49.929222 | orchestrator | | | name='icmp' | 2025-02-10 10:06:49.929236 | orchestrator | | server_groups | None | 2025-02-10 10:06:49.929255 | orchestrator | | status | ACTIVE | 2025-02-10 10:06:49.929268 | orchestrator | | tags | test | 2025-02-10 10:06:49.929282 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:06:49.929296 | orchestrator | | updated | 2025-02-10T10:05:31Z | 2025-02-10 10:06:49.929315 | orchestrator | | user_id | 903b03f6a9e6448a877fedde0c8a040f | 2025-02-10 10:06:49.930652 | orchestrator | | volumes_attached | | 2025-02-10 10:06:49.930692 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:50.316062 | orchestrator | + openstack --os-cloud test server show test-4 2025-02-10 10:06:53.925255 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:53.925445 | orchestrator | | Field | Value | 2025-02-10 10:06:53.925511 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:53.925536 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:06:53.925576 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:06:53.925599 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:06:53.925621 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-02-10 10:06:53.925642 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:06:53.925662 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:06:53.925684 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:06:53.925706 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:06:53.925757 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:06:53.925780 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:06:53.925801 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:06:53.925829 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:06:53.925851 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:06:53.925874 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:06:53.925895 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:06:53.925917 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:04:56.000000 | 2025-02-10 10:06:53.925941 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:06:53.925963 | orchestrator | | accessIPv4 | | 2025-02-10 10:06:53.925985 | orchestrator | | accessIPv6 | | 2025-02-10 10:06:53.926082 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.36, 192.168.112.193 | 2025-02-10 10:06:53.926121 | orchestrator | | config_drive | | 2025-02-10 10:06:53.926143 | orchestrator | | created | 2025-02-10T10:04:48Z | 2025-02-10 10:06:53.926167 | orchestrator | | description | None | 2025-02-10 10:06:53.926180 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:06:53.926193 | orchestrator | | hostId | 91800816d7aa73fa2817e7f098bfa9c3454b77f5cafa3b04881fcedd | 2025-02-10 10:06:53.926205 | orchestrator | | host_status | None | 2025-02-10 10:06:53.926218 | orchestrator | | id | 0ba45d81-1c05-49fd-82b1-b5bc4ff7a5ae | 2025-02-10 10:06:53.926230 | orchestrator | | image | Cirros 0.6.2 (c7d8aa39-9168-47aa-a3da-7f093edb56ee) | 2025-02-10 10:06:53.926242 | orchestrator | | key_name | test | 2025-02-10 10:06:53.926254 | orchestrator | | locked | False | 2025-02-10 10:06:53.926274 | orchestrator | | locked_reason | None | 2025-02-10 10:06:53.926286 | orchestrator | | name | test-4 | 2025-02-10 10:06:53.926309 | orchestrator | | pinned_availability_zone | None | 2025-02-10 10:06:53.926322 | orchestrator | | progress | 0 | 2025-02-10 10:06:53.926335 | orchestrator | | project_id | f5047d9482a147609746840e312130f7 | 2025-02-10 10:06:53.926348 | orchestrator | | properties | hostname='test-4' | 2025-02-10 10:06:53.926360 | orchestrator | | security_groups | name='ssh' | 2025-02-10 10:06:53.926373 | orchestrator | | | name='icmp' | 2025-02-10 10:06:53.926385 | orchestrator | | server_groups | None | 2025-02-10 10:06:53.926398 | orchestrator | | status | ACTIVE | 2025-02-10 10:06:53.926410 | orchestrator | | tags | test | 2025-02-10 10:06:53.926436 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:06:53.926480 | orchestrator | | updated | 2025-02-10T10:05:36Z | 2025-02-10 10:06:53.926517 | orchestrator | | user_id | 903b03f6a9e6448a877fedde0c8a040f | 2025-02-10 10:06:54.291174 | orchestrator | | volumes_attached | | 2025-02-10 10:06:54.291321 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:06:54.291360 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-02-10 10:06:54.439700 | orchestrator | changed 2025-02-10 10:06:54.497671 | 2025-02-10 10:06:54.497872 | TASK [Run tempest] 2025-02-10 10:06:54.609314 | orchestrator | skipping: Conditional result was False 2025-02-10 10:06:54.628284 | 2025-02-10 10:06:54.628457 | TASK [Check prometheus alert status] 2025-02-10 10:06:54.728529 | orchestrator | skipping: Conditional result was False 2025-02-10 10:06:54.773244 | 2025-02-10 10:06:54.773359 | PLAY RECAP 2025-02-10 10:06:54.773416 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-02-10 10:06:54.773442 | 2025-02-10 10:06:55.038720 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-02-10 10:06:55.047475 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-02-10 
10:06:55.768790 | 2025-02-10 10:06:55.768963 | PLAY [Post output play] 2025-02-10 10:06:55.798999 | 2025-02-10 10:06:55.799153 | LOOP [stage-output : Register sources] 2025-02-10 10:06:55.884903 | 2025-02-10 10:06:55.885199 | TASK [stage-output : Check sudo] 2025-02-10 10:06:56.615783 | orchestrator | sudo: a password is required 2025-02-10 10:06:56.930577 | orchestrator | ok: Runtime: 0:00:00.012323 2025-02-10 10:06:56.949266 | 2025-02-10 10:06:56.949408 | LOOP [stage-output : Set source and destination for files and folders] 2025-02-10 10:06:57.003167 | 2025-02-10 10:06:57.003442 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-02-10 10:06:57.096797 | orchestrator | ok 2025-02-10 10:06:57.107454 | 2025-02-10 10:06:57.107580 | LOOP [stage-output : Ensure target folders exist] 2025-02-10 10:06:57.622367 | orchestrator | ok: "docs" 2025-02-10 10:06:57.622683 | 2025-02-10 10:06:57.900137 | orchestrator | ok: "artifacts" 2025-02-10 10:06:58.145581 | orchestrator | ok: "logs" 2025-02-10 10:06:58.175337 | 2025-02-10 10:06:58.175548 | LOOP [stage-output : Copy files and folders to staging folder] 2025-02-10 10:06:58.220772 | 2025-02-10 10:06:58.221046 | TASK [stage-output : Make all log files readable] 2025-02-10 10:06:58.528489 | orchestrator | ok 2025-02-10 10:06:58.538888 | 2025-02-10 10:06:58.539021 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-02-10 10:06:58.615556 | orchestrator | skipping: Conditional result was False 2025-02-10 10:06:58.633175 | 2025-02-10 10:06:58.633334 | TASK [stage-output : Discover log files for compression] 2025-02-10 10:06:58.659955 | orchestrator | skipping: Conditional result was False 2025-02-10 10:06:58.676709 | 2025-02-10 10:06:58.677016 | LOOP [stage-output : Archive everything from logs] 2025-02-10 10:06:58.763365 | 2025-02-10 10:06:58.763538 | PLAY [Post cleanup play] 2025-02-10 10:06:58.787575 | 2025-02-10 10:06:58.787700 | TASK [Set cloud fact (Zuul deployment)] 2025-02-10 10:06:58.857982 | orchestrator | ok 2025-02-10 10:06:58.869699 | 2025-02-10 10:06:58.869871 | TASK [Set cloud fact (local deployment)] 2025-02-10 10:06:58.904653 | orchestrator | skipping: Conditional result was False 2025-02-10 10:06:58.920691 | 2025-02-10 10:06:58.920854 | TASK [Clean the cloud environment] 2025-02-10 10:06:59.789480 | orchestrator | 2025-02-10 10:06:59 - clean up servers 2025-02-10 10:07:00.654968 | orchestrator | 2025-02-10 10:07:00 - testbed-manager 2025-02-10 10:07:00.745328 | orchestrator | 2025-02-10 10:07:00 - testbed-node-4 2025-02-10 10:07:00.839679 | orchestrator | 2025-02-10 10:07:00 - testbed-node-2 2025-02-10 10:07:00.934946 | orchestrator | 2025-02-10 10:07:00 - testbed-node-0 2025-02-10 10:07:01.037563 | orchestrator | 2025-02-10 10:07:01 - testbed-node-1 2025-02-10 10:07:01.140831 | orchestrator | 2025-02-10 10:07:01 - testbed-node-3 2025-02-10 10:07:01.239345 | orchestrator | 2025-02-10 10:07:01 - testbed-node-5 2025-02-10 10:07:01.328383 | orchestrator | 2025-02-10 10:07:01 - clean up keypairs 2025-02-10 10:07:01.346828 | orchestrator | 2025-02-10 10:07:01 - testbed 2025-02-10 10:07:01.377864 | orchestrator | 2025-02-10 10:07:01 - wait for servers to be gone 2025-02-10 10:07:12.597930 | orchestrator | 2025-02-10 10:07:12 - clean up ports 2025-02-10 10:07:12.825843 | orchestrator | 2025-02-10 10:07:12 - 052af1b3-7772-46d7-a098-54b45fd10c0f 2025-02-10 10:07:13.023793 | orchestrator | 2025-02-10 10:07:13 - 26650198-2949-468b-b7ec-7e29631fdf11 2025-02-10 10:07:13.385567 | orchestrator | 2025-02-10 
10:07:13 - 3956339e-3590-4230-989a-cc262218aa50 2025-02-10 10:07:13.580442 | orchestrator | 2025-02-10 10:07:13 - 6a11824b-a7fb-46d8-901f-38995b84583d 2025-02-10 10:07:13.780494 | orchestrator | 2025-02-10 10:07:13 - b122ba1f-4eee-461c-b93f-4f26810da0fe 2025-02-10 10:07:13.994702 | orchestrator | 2025-02-10 10:07:13 - e1c56074-3714-44ab-84b3-4f24069b590e 2025-02-10 10:07:14.202638 | orchestrator | 2025-02-10 10:07:14 - e285733c-df78-4f52-93ae-d47fffac168d 2025-02-10 10:07:14.392743 | orchestrator | 2025-02-10 10:07:14 - clean up volumes 2025-02-10 10:07:14.540231 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-3-node-base 2025-02-10 10:07:14.581280 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-5-node-base 2025-02-10 10:07:14.622578 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-0-node-base 2025-02-10 10:07:14.660282 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-4-node-base 2025-02-10 10:07:14.700591 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-1-node-base 2025-02-10 10:07:14.743954 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-2-node-base 2025-02-10 10:07:14.786172 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-manager-base 2025-02-10 10:07:14.826898 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-7-node-1 2025-02-10 10:07:14.869763 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-12-node-0 2025-02-10 10:07:14.910599 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-11-node-5 2025-02-10 10:07:14.955305 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-17-node-5 2025-02-10 10:07:14.999962 | orchestrator | 2025-02-10 10:07:14 - testbed-volume-14-node-2 2025-02-10 10:07:15.049240 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-10-node-4 2025-02-10 10:07:15.094998 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-3-node-3 2025-02-10 10:07:15.135278 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-15-node-3 2025-02-10 10:07:15.177708 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-4-node-4 2025-02-10 10:07:15.221635 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-1-node-1 2025-02-10 10:07:15.262589 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-5-node-5 2025-02-10 10:07:15.304861 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-0-node-0 2025-02-10 10:07:15.347218 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-13-node-1 2025-02-10 10:07:15.396602 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-2-node-2 2025-02-10 10:07:15.443135 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-8-node-2 2025-02-10 10:07:15.487365 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-6-node-0 2025-02-10 10:07:15.526766 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-9-node-3 2025-02-10 10:07:15.575860 | orchestrator | 2025-02-10 10:07:15 - testbed-volume-16-node-4 2025-02-10 10:07:15.617546 | orchestrator | 2025-02-10 10:07:15 - disconnect routers 2025-02-10 10:07:15.671960 | orchestrator | 2025-02-10 10:07:15 - testbed 2025-02-10 10:07:16.268071 | orchestrator | 2025-02-10 10:07:16 - clean up subnets 2025-02-10 10:07:16.301316 | orchestrator | 2025-02-10 10:07:16 - subnet-testbed-management 2025-02-10 10:07:16.432219 | orchestrator | 2025-02-10 10:07:16 - clean up networks 2025-02-10 10:07:16.588181 | orchestrator | 2025-02-10 10:07:16 - net-testbed-management 2025-02-10 10:07:16.859081 | orchestrator | 2025-02-10 10:07:16 - clean up security groups 2025-02-10 10:07:16.894263 | orchestrator | 2025-02-10 10:07:16 - testbed-node 2025-02-10 10:07:16.976105 | orchestrator | 2025-02-10 
10:07:16 - testbed-management 2025-02-10 10:07:17.080929 | orchestrator | 2025-02-10 10:07:17 - clean up floating ips 2025-02-10 10:07:17.942381 | orchestrator | 2025-02-10 10:07:17 - 81.163.193.162 2025-02-10 10:07:18.337884 | orchestrator | 2025-02-10 10:07:18 - clean up routers 2025-02-10 10:07:18.382766 | orchestrator | 2025-02-10 10:07:18 - testbed 2025-02-10 10:07:19.064892 | orchestrator | changed 2025-02-10 10:07:19.102561 | 2025-02-10 10:07:19.102811 | PLAY RECAP 2025-02-10 10:07:19.102965 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-02-10 10:07:19.103044 | 2025-02-10 10:07:19.243196 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-02-10 10:07:19.250367 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-02-10 10:07:19.975515 | 2025-02-10 10:07:19.975682 | PLAY [Base post-fetch] 2025-02-10 10:07:20.006013 | 2025-02-10 10:07:20.006154 | TASK [fetch-output : Set log path for multiple nodes] 2025-02-10 10:07:20.073200 | orchestrator | skipping: Conditional result was False 2025-02-10 10:07:20.081615 | 2025-02-10 10:07:20.081777 | TASK [fetch-output : Set log path for single node] 2025-02-10 10:07:20.137291 | orchestrator | ok 2025-02-10 10:07:20.147817 | 2025-02-10 10:07:20.147947 | LOOP [fetch-output : Ensure local output dirs] 2025-02-10 10:07:20.649411 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0ea8ccc664f545108af0c9bbdc49281a/work/logs" 2025-02-10 10:07:20.926527 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0ea8ccc664f545108af0c9bbdc49281a/work/artifacts" 2025-02-10 10:07:21.216754 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0ea8ccc664f545108af0c9bbdc49281a/work/docs" 2025-02-10 10:07:21.229853 | 2025-02-10 10:07:21.229980 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-02-10 10:07:22.053682 | orchestrator | changed: .d..t...... ./ 2025-02-10 10:07:22.054012 | orchestrator | changed: All items complete 2025-02-10 10:07:22.054051 | 2025-02-10 10:07:22.640560 | orchestrator | changed: .d..t...... ./ 2025-02-10 10:07:23.207979 | orchestrator | changed: .d..t...... 
./ 2025-02-10 10:07:23.233427 | 2025-02-10 10:07:23.233570 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-02-10 10:07:23.269362 | orchestrator | skipping: Conditional result was False 2025-02-10 10:07:23.279264 | orchestrator | skipping: Conditional result was False 2025-02-10 10:07:23.330274 | 2025-02-10 10:07:23.330381 | PLAY RECAP 2025-02-10 10:07:23.330450 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-02-10 10:07:23.330479 | 2025-02-10 10:07:23.454130 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-02-10 10:07:23.462459 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-02-10 10:07:24.151912 | 2025-02-10 10:07:24.152066 | PLAY [Base post] 2025-02-10 10:07:24.180905 | 2025-02-10 10:07:24.181042 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-02-10 10:07:25.267173 | orchestrator | changed 2025-02-10 10:07:25.306801 | 2025-02-10 10:07:25.306929 | PLAY RECAP 2025-02-10 10:07:25.306997 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-02-10 10:07:25.307060 | 2025-02-10 10:07:25.419740 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-02-10 10:07:25.423106 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-02-10 10:07:26.168322 | 2025-02-10 10:07:26.168521 | PLAY [Base post-logs] 2025-02-10 10:07:26.186659 | 2025-02-10 10:07:26.186841 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-02-10 10:07:26.650285 | localhost | changed 2025-02-10 10:07:26.657200 | 2025-02-10 10:07:26.657381 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-02-10 10:07:26.690966 | localhost | ok 2025-02-10 10:07:26.700939 | 2025-02-10 10:07:26.701088 | TASK [Set zuul-log-path fact] 2025-02-10 10:07:26.732088 | localhost | ok 2025-02-10 10:07:26.749560 | 2025-02-10 10:07:26.749692 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-02-10 10:07:26.786228 | localhost | skipping: Conditional result was False 2025-02-10 10:07:26.790424 | 2025-02-10 10:07:26.790551 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-02-10 10:07:26.828069 | localhost | ok 2025-02-10 10:07:26.831703 | 2025-02-10 10:07:26.831838 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-02-10 10:07:26.877293 | localhost | skipping: Conditional result was False 2025-02-10 10:07:26.885264 | 2025-02-10 10:07:26.885459 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-02-10 10:07:26.902805 | localhost | skipping: Conditional result was False 2025-02-10 10:07:26.910437 | 2025-02-10 10:07:26.910610 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-02-10 10:07:26.936325 | localhost | skipping: Conditional result was False 2025-02-10 10:07:26.943739 | 2025-02-10 10:07:26.943894 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-02-10 10:07:26.970020 | localhost | skipping: Conditional result was False 2025-02-10 10:07:26.980283 | 2025-02-10 10:07:26.980463 | TASK [upload-logs : Create log directories] 2025-02-10 10:07:27.497163 | localhost | changed 2025-02-10 10:07:27.507155 | 2025-02-10 10:07:27.507336 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-02-10 10:07:28.023356 | localhost -> localhost | ok: Runtime: 0:00:00.007689 2025-02-10 
10:07:28.028819 | 2025-02-10 10:07:28.028948 | TASK [upload-logs : Upload logs to log server] 2025-02-10 10:07:28.618970 | localhost | Output suppressed because no_log was given 2025-02-10 10:07:28.625917 | 2025-02-10 10:07:28.626086 | LOOP [upload-logs : Compress console log and json output] 2025-02-10 10:07:28.705924 | localhost | skipping: Conditional result was False 2025-02-10 10:07:28.724567 | localhost | skipping: Conditional result was False 2025-02-10 10:07:28.738023 | 2025-02-10 10:07:28.738295 | LOOP [upload-logs : Upload compressed console log and json output] 2025-02-10 10:07:28.804187 | localhost | skipping: Conditional result was False 2025-02-10 10:07:28.804921 | 2025-02-10 10:07:28.816933 | localhost | skipping: Conditional result was False 2025-02-10 10:07:28.828358 | 2025-02-10 10:07:28.828625 | LOOP [upload-logs : Upload console log and json output]
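
Editor's note: the console output above shows the deploy playbook verifying the four smoke-test servers (test-1 through test-4) with "openstack --os-cloud test server show", all of them reporting status ACTIVE and power state Running, before comparing the deployed version against latest ("[[ 8.1.0 == latest ]]") and skipping the tempest and Prometheus tasks (their conditionals evaluated to false). The post playbooks then tear down the testbed resources in order (servers, keypairs, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, routers) and publish the job logs. Below is a minimal sketch of how the verification step could be repeated by hand; it assumes a clouds.yaml profile named "test" and the server names seen in the log, while the loop and the "-f value -c status" output flags are illustrative and not part of the job itself.

    #!/usr/bin/env bash
    # Hedged sketch: re-check the smoke-test servers created by the job.
    # Assumption: a clouds.yaml entry "test" and servers test-1..test-4,
    # exactly as they appear in the console output above.
    set -euo pipefail

    for name in test-1 test-2 test-3 test-4; do
        # Print only the status column of "server show" for each instance.
        status=$(openstack --os-cloud test server show "$name" -f value -c status)
        echo "${name}: ${status}"
        # Fail fast if any server is not ACTIVE, mirroring a simple smoke check.
        [ "$status" = "ACTIVE" ] || { echo "server ${name} is not ACTIVE" >&2; exit 1; }
    done

Run with the same credentials the job used (--os-cloud test); the loop exits non-zero on the first server that is not ACTIVE, which is roughly the condition the job's own verification relies on before cleanup begins.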